Feb 13 19:18:39.906186 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:18:39.906208 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:18:39.906219 kernel: BIOS-provided physical RAM map:
Feb 13 19:18:39.906225 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:18:39.906232 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:18:39.906238 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:18:39.906245 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:18:39.906252 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:18:39.906258 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:18:39.906265 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:18:39.906271 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 19:18:39.906280 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:18:39.906286 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:18:39.906293 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:18:39.906301 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:18:39.906308 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:18:39.906317 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:18:39.906324 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:18:39.906331 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:18:39.906338 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:18:39.906345 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:18:39.906351 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:18:39.906358 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:18:39.906365 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:18:39.906372 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:18:39.906379 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:18:39.906386 kernel: NX (Execute Disable) protection: active
Feb 13 19:18:39.906395 kernel: APIC: Static calls initialized
Feb 13 19:18:39.906402 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:18:39.906409 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:18:39.906416 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:18:39.906422 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:18:39.906429 kernel: extended physical RAM map:
Feb 13 19:18:39.906436 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:18:39.906443 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:18:39.906450 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:18:39.906457 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:18:39.906464 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:18:39.906471 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:18:39.906480 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:18:39.906491 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 19:18:39.906498 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 19:18:39.906505 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 19:18:39.906512 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 19:18:39.906519 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 19:18:39.906529 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:18:39.906536 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:18:39.906543 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:18:39.906550 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:18:39.906558 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:18:39.906565 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:18:39.906572 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:18:39.906579 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:18:39.906586 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:18:39.906596 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:18:39.906603 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:18:39.906610 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:18:39.906617 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:18:39.906624 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:18:39.906631 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:18:39.906638 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:18:39.906646 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 19:18:39.906653 kernel: random: crng init done
Feb 13 19:18:39.906660 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 19:18:39.906667 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 19:18:39.906675 kernel: secureboot: Secure boot disabled
Feb 13 19:18:39.906693 kernel: SMBIOS 2.8 present.
Feb 13 19:18:39.906701 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 19:18:39.906708 kernel: Hypervisor detected: KVM
Feb 13 19:18:39.906715 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:18:39.906722 kernel: kvm-clock: using sched offset of 2577841205 cycles
Feb 13 19:18:39.906730 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:18:39.906738 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:18:39.906746 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:18:39.906754 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:18:39.906761 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 19:18:39.906771 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:18:39.906778 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:18:39.906786 kernel: Using GB pages for direct mapping
Feb 13 19:18:39.906793 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:18:39.906800 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:18:39.906808 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:18:39.906815 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:18:39.906823 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:18:39.906830 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:18:39.906840 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:18:39.906847 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:18:39.906854 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:18:39.906862 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:18:39.906869 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:18:39.906876 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:18:39.906884 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:18:39.906891 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:18:39.906899 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:18:39.906909 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:18:39.906916 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:18:39.907009 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:18:39.907017 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:18:39.907024 kernel: No NUMA configuration found
Feb 13 19:18:39.907032 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 19:18:39.907039 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 19:18:39.907046 kernel: Zone ranges:
Feb 13 19:18:39.907054 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:18:39.907064 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 19:18:39.907071 kernel: Normal empty
Feb 13 19:18:39.907078 kernel: Movable zone start for each node
Feb 13 19:18:39.907086 kernel: Early memory node ranges
Feb 13 19:18:39.907093 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:18:39.907100 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:18:39.907108 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:18:39.907115 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 19:18:39.907122 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 19:18:39.907132 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 19:18:39.907139 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 19:18:39.907146 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 19:18:39.907154 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 19:18:39.907161 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:18:39.907168 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:18:39.907183 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:18:39.907192 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:18:39.907200 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 19:18:39.907207 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 19:18:39.907215 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:18:39.907223 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 19:18:39.907230 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 19:18:39.907240 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:18:39.907248 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:18:39.907255 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:18:39.907263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:18:39.907270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:18:39.907280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:18:39.907288 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:18:39.907295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:18:39.907303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:18:39.907311 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:18:39.907318 kernel: TSC deadline timer available
Feb 13 19:18:39.907326 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:18:39.907334 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:18:39.907341 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:18:39.907351 kernel: kvm-guest: setup PV sched yield
Feb 13 19:18:39.907358 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 19:18:39.907366 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:18:39.907374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:18:39.907382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:18:39.907389 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:18:39.907397 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:18:39.907404 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:18:39.907412 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:18:39.907422 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:18:39.907431 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:18:39.907439 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:18:39.907447 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:18:39.907454 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:18:39.907462 kernel: Fallback order for Node 0: 0
Feb 13 19:18:39.907470 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 19:18:39.907477 kernel: Policy zone: DMA32
Feb 13 19:18:39.907487 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:18:39.907495 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved)
Feb 13 19:18:39.907504 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:18:39.907511 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:18:39.907519 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:18:39.907526 kernel: Dynamic Preempt: voluntary
Feb 13 19:18:39.907534 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:18:39.907542 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:18:39.907550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:18:39.907560 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:18:39.907568 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:18:39.907576 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:18:39.907583 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:18:39.907591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:18:39.907599 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:18:39.907606 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:18:39.907614 kernel: Console: colour dummy device 80x25
Feb 13 19:18:39.907622 kernel: printk: console [ttyS0] enabled
Feb 13 19:18:39.907631 kernel: ACPI: Core revision 20230628
Feb 13 19:18:39.907639 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:18:39.907647 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:18:39.907654 kernel: x2apic enabled
Feb 13 19:18:39.907662 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:18:39.907670 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:18:39.907685 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:18:39.907693 kernel: kvm-guest: setup PV IPIs
Feb 13 19:18:39.907700 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:18:39.907710 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:18:39.907718 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:18:39.907726 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:18:39.907734 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:18:39.907742 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:18:39.907750 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:18:39.907757 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:18:39.907765 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:18:39.907772 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:18:39.907782 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:18:39.907790 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:18:39.907797 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:18:39.907805 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:18:39.907813 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:18:39.907821 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:18:39.907829 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:18:39.907836 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:18:39.907846 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:18:39.907854 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:18:39.907862 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:18:39.907869 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:18:39.907877 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:18:39.907885 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:18:39.907892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:18:39.907900 kernel: landlock: Up and running.
Feb 13 19:18:39.907907 kernel: SELinux: Initializing.
Feb 13 19:18:39.907915 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:18:39.907935 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:18:39.907943 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:18:39.907950 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:18:39.907958 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:18:39.907966 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:18:39.907974 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:18:39.907981 kernel: ... version: 0
Feb 13 19:18:39.907989 kernel: ... bit width: 48
Feb 13 19:18:39.907999 kernel: ... generic registers: 6
Feb 13 19:18:39.908007 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:18:39.908014 kernel: ... max period: 00007fffffffffff
Feb 13 19:18:39.908022 kernel: ... fixed-purpose events: 0
Feb 13 19:18:39.908030 kernel: ... event mask: 000000000000003f
Feb 13 19:18:39.908037 kernel: signal: max sigframe size: 1776
Feb 13 19:18:39.908045 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:18:39.908053 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:18:39.908060 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:18:39.908070 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:18:39.908078 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:18:39.908085 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:18:39.908093 kernel: smpboot: Max logical packages: 1
Feb 13 19:18:39.908101 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:18:39.908108 kernel: devtmpfs: initialized
Feb 13 19:18:39.908116 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:18:39.908124 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:18:39.908132 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:18:39.908139 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 19:18:39.908149 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:18:39.908157 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 19:18:39.908165 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:18:39.908173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:18:39.908181 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:18:39.908188 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:18:39.908196 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:18:39.908203 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:18:39.908213 kernel: audit: type=2000 audit(1739474319.996:1): state=initialized audit_enabled=0 res=1
Feb 13 19:18:39.908221 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:18:39.908228 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:18:39.908236 kernel: cpuidle: using governor menu
Feb 13 19:18:39.908244 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:18:39.908251 kernel: dca service started, version 1.12.1
Feb 13 19:18:39.908259 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:18:39.908266 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:18:39.908274 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:18:39.908284 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:18:39.908292 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:18:39.908299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:18:39.908307 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:18:39.908315 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:18:39.908322 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:18:39.908330 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:18:39.908337 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:18:39.908345 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:18:39.908355 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:18:39.908365 kernel: ACPI: Interpreter enabled
Feb 13 19:18:39.908373 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:18:39.908382 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:18:39.908390 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:18:39.908398 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:18:39.908406 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:18:39.908413 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:18:39.908635 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:18:39.908826 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:18:39.908967 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:18:39.908979 kernel: PCI host bridge to bus 0000:00
Feb 13 19:18:39.909106 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:18:39.909222 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:18:39.909337 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:18:39.909457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 19:18:39.909569 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 19:18:39.909710 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:18:39.909826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:18:39.910048 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:18:39.910206 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:18:39.910388 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:18:39.910523 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:18:39.910648 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:18:39.910780 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:18:39.910903 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:18:39.911064 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:18:39.911191 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:18:39.911321 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:18:39.911449 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 19:18:39.911580 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:18:39.911725 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:18:39.911850 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:18:39.912002 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 19:18:39.912140 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:18:39.912268 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:18:39.912394 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:18:39.912549 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 19:18:39.912692 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:18:39.912825 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:18:39.912964 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:18:39.913112 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:18:39.913244 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:18:39.913368 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:18:39.913502 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:18:39.913625 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:18:39.913635 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:18:39.913643 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:18:39.913651 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:18:39.913662 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:18:39.913669 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:18:39.913677 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:18:39.913694 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:18:39.913702 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:18:39.913710 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:18:39.913717 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:18:39.913725 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:18:39.913733 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:18:39.913744 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:18:39.913751 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:18:39.913759 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:18:39.913766 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:18:39.913774 kernel: iommu: Default domain type: Translated
Feb 13 19:18:39.913782 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:18:39.913789 kernel: efivars: Registered efivars operations
Feb 13 19:18:39.913797 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:18:39.913805 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:18:39.913812 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:18:39.913822 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 19:18:39.913830 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 19:18:39.913837 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 19:18:39.913845 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 19:18:39.913852 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 19:18:39.913860 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 19:18:39.913868 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 19:18:39.914009 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:18:39.914152 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:18:39.914277 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:18:39.914288 kernel: vgaarb: loaded
Feb 13 19:18:39.914296 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:18:39.914303 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:18:39.914311 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:18:39.914319 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:18:39.914327 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:18:39.914334 kernel: pnp: PnP ACPI init
Feb 13 19:18:39.914473 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 19:18:39.914484 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:18:39.914492 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:18:39.914500 kernel: NET: Registered PF_INET protocol family
Feb 13 19:18:39.914524 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:18:39.914534 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:18:39.914563 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:18:39.914572 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:18:39.914590 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:18:39.914598 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:18:39.914606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:18:39.914614 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:18:39.914622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:18:39.914630 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:18:39.914785 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:18:39.914910 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:18:39.915181 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:18:39.915295 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:18:39.915412 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:18:39.915524 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 19:18:39.915636 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 19:18:39.915757 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:18:39.915768 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:18:39.915777 kernel: Initialise system trusted keyrings
Feb 13 19:18:39.915789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:18:39.915797 kernel: Key type asymmetric registered
Feb 13 19:18:39.915805 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:18:39.915813 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:18:39.915820 kernel: io scheduler mq-deadline registered
Feb 13 19:18:39.915828 kernel: io scheduler kyber registered
Feb 13 19:18:39.915836 kernel: io scheduler bfq registered
Feb 13 19:18:39.915844 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:18:39.915853 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:18:39.915864 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:18:39.915874 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:18:39.915882 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:18:39.915890 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:18:39.915898 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:18:39.915906 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:18:39.915917 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:18:39.916056 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:18:39.916069 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:18:39.916182 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:18:39.916295 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:18:39 UTC (1739474319)
Feb 13 19:18:39.916410 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:18:39.916420 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:18:39.916428 kernel: efifb: probing for efifb
Feb 13 19:18:39.916440 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 19:18:39.916448 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 19:18:39.916456 kernel: efifb: scrolling: redraw
Feb 13 19:18:39.916464 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:18:39.916472 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:18:39.916480 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:18:39.916488 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:18:39.916497 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:18:39.916504 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:18:39.916515 kernel: Segment Routing with IPv6
Feb 13 19:18:39.916523 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:18:39.916531 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:18:39.916539 kernel: Key type dns_resolver registered
Feb 13 19:18:39.916547 kernel: IPI shorthand broadcast: enabled
Feb 13 19:18:39.916554 kernel: sched_clock: Marking stable (613002809, 155630820)->(798834180, -30200551)
Feb 13 19:18:39.916562 kernel: registered taskstats version 1
Feb 13 19:18:39.916573 kernel: Loading compiled-in X.509 certificates
Feb 13 19:18:39.916590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:18:39.916608 kernel: Key type .fscrypt registered
Feb 13 19:18:39.916618 kernel: Key type fscrypt-provisioning registered
Feb 13 19:18:39.916626 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:18:39.916634 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:18:39.916642 kernel: ima: No architecture policies found Feb 13 19:18:39.916650 kernel: clk: Disabling unused clocks Feb 13 19:18:39.916658 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 19:18:39.916666 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:18:39.916674 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 19:18:39.916694 kernel: Run /init as init process Feb 13 19:18:39.916702 kernel: with arguments: Feb 13 19:18:39.916710 kernel: /init Feb 13 19:18:39.916718 kernel: with environment: Feb 13 19:18:39.916726 kernel: HOME=/ Feb 13 19:18:39.916734 kernel: TERM=linux Feb 13 19:18:39.916742 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:18:39.916751 systemd[1]: Successfully made /usr/ read-only. Feb 13 19:18:39.916765 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:18:39.916774 systemd[1]: Detected virtualization kvm. Feb 13 19:18:39.916782 systemd[1]: Detected architecture x86-64. Feb 13 19:18:39.916790 systemd[1]: Running in initrd. Feb 13 19:18:39.916799 systemd[1]: No hostname configured, using default hostname. Feb 13 19:18:39.916807 systemd[1]: Hostname set to . Feb 13 19:18:39.916816 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:18:39.916824 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:18:39.916835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:39.916844 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:18:39.916853 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:18:39.916862 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:18:39.916871 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:18:39.916880 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:18:39.916890 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:18:39.916901 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:18:39.916910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:39.916954 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:39.916964 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:18:39.916972 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:18:39.916981 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:18:39.916990 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:18:39.917009 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:18:39.917024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:18:39.917033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:18:39.917041 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 19:18:39.917050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:39.917058 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:39.917067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 19:18:39.917075 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:18:39.917084 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:18:39.917095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:18:39.917109 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:18:39.917121 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:18:39.917132 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:18:39.917143 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:18:39.917154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:39.917166 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:18:39.917178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:39.917192 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:18:39.917204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:18:39.917245 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 19:18:39.917276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:39.917288 systemd-journald[194]: Journal started Feb 13 19:18:39.917312 systemd-journald[194]: Runtime Journal (/run/log/journal/21a329d8b27d41f6b65390c2b332b823) is 6M, max 48.2M, 42.2M free. Feb 13 19:18:39.910469 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 19:18:39.932191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:18:39.935939 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:18:39.936659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:18:39.940867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:39.945506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:18:39.945536 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:18:39.948950 kernel: Bridge firewalling registered Feb 13 19:18:39.948989 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 19:18:39.951081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:39.954209 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:39.955154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:39.965829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:39.968803 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:39.971544 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:39.982196 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:18:39.986581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:18:39.997185 dracut-cmdline[228]: dracut-dracut-053 Feb 13 19:18:40.000240 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416 Feb 13 19:18:40.037440 systemd-resolved[231]: Positive Trust Anchors:
Feb 13 19:18:40.037458 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:18:40.037488 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:18:40.040090 systemd-resolved[231]: Defaulting to hostname 'linux'. Feb 13 19:18:40.041206 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:18:40.047232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:40.101955 kernel: SCSI subsystem initialized Feb 13 19:18:40.111950 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:18:40.121953 kernel: iscsi: registered transport (tcp) Feb 13 19:18:40.146112 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:18:40.146142 kernel: QLogic iSCSI HBA Driver Feb 13 19:18:40.204174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:18:40.215375 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:18:40.241831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:18:40.241909 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:18:40.241937 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:18:40.284967 kernel: raid6: avx2x4 gen() 27815 MB/s Feb 13 19:18:40.301947 kernel: raid6: avx2x2 gen() 29881 MB/s Feb 13 19:18:40.319026 kernel: raid6: avx2x1 gen() 24711 MB/s Feb 13 19:18:40.319056 kernel: raid6: using algorithm avx2x2 gen() 29881 MB/s Feb 13 19:18:40.337145 kernel: raid6: .... xor() 18670 MB/s, rmw enabled Feb 13 19:18:40.337181 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:18:40.358970 kernel: xor: automatically using best checksumming function avx Feb 13 19:18:40.504954 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:18:40.517087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:18:40.524236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:40.540885 systemd-udevd[415]: Using default interface naming scheme 'v255'. Feb 13 19:18:40.546329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:40.559078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:18:40.574400 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Feb 13 19:18:40.613157 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:18:40.622177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:18:40.689499 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:40.702430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:18:40.712474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:18:40.714531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 19:18:40.717201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:40.718405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:18:40.726949 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:18:40.767483 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:18:40.771037 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:18:40.771053 kernel: GPT:9289727 != 19775487 Feb 13 19:18:40.771064 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:18:40.771074 kernel: GPT:9289727 != 19775487 Feb 13 19:18:40.771084 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:18:40.771094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:40.771105 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:18:40.771124 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:18:40.771134 kernel: libata version 3.00 loaded. Feb 13 19:18:40.771145 kernel: AES CTR mode by8 optimization enabled Feb 13 19:18:40.734083 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
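[Editor's note: the "GPT:Primary header thinks Alt. header is not at the end of the disk" warnings above are typical of a disk image grown after partitioning. GPT keeps its backup (alternate) header in the very last LBA, i.e. total sectors minus one, so the kernel compares where it found the backup header against the end of the device. A quick arithmetic sketch using the values reported in this log (19775488 blocks from virtio_blk, 9289727 from the GPT line):]

```shell
# virtio_blk reported 19775488 512-byte logical blocks for /dev/vda (see log above).
sectors=19775488
# GPT places the backup header at the last LBA of the disk.
expected_alt=$((sectors - 1))
# Where the kernel actually found the backup header, per the log:
found_alt=9289727
echo "expected=$expected_alt found=$found_alt"
```

[Because the two differ, the kernel prints "GPT:9289727 != 19775487". Tools such as GNU Parted (as the log suggests) or `sgdisk -e` can typically relocate the backup structures to the new end of the disk; the exact invocation depends on the environment.]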
Feb 13 19:18:40.773068 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:18:40.794730 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:18:40.794747 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:18:40.794909 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:18:40.795076 kernel: scsi host0: ahci Feb 13 19:18:40.795236 kernel: scsi host1: ahci Feb 13 19:18:40.795382 kernel: scsi host2: ahci Feb 13 19:18:40.795525 kernel: scsi host3: ahci Feb 13 19:18:40.795679 kernel: scsi host4: ahci Feb 13 19:18:40.795826 kernel: scsi host5: ahci Feb 13 19:18:40.795994 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 19:18:40.796007 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 19:18:40.796018 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 19:18:40.796028 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 19:18:40.796039 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 19:18:40.796049 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467) Feb 13 19:18:40.796059 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 19:18:40.796070 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (465) Feb 13 19:18:40.745979 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:18:40.819990 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:18:40.837170 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:18:40.846775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Feb 13 19:18:40.854776 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:18:40.856028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:18:40.866066 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:18:40.867228 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:18:40.867289 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:40.868071 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:40.868356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:18:40.868402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:40.870806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:40.873742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:40.889062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:40.892336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:40.910357 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:41.034806 disk-uuid[553]: Primary Header is updated. Feb 13 19:18:41.034806 disk-uuid[553]: Secondary Entries is updated. Feb 13 19:18:41.034806 disk-uuid[553]: Secondary Header is updated. 
Feb 13 19:18:41.038937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:41.043948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:41.104026 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:18:41.104087 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:18:41.107171 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:18:41.107244 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:18:41.107259 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:18:41.107275 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:18:41.108327 kernel: ata3.00: applying bridge limits Feb 13 19:18:41.108383 kernel: ata3.00: configured for UDMA/100 Feb 13 19:18:41.110939 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:18:41.112941 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:18:41.160946 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:18:41.175548 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:18:41.175564 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:18:42.059889 disk-uuid[568]: The operation has completed successfully. Feb 13 19:18:42.061134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:18:42.090711 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:18:42.090835 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:18:42.135137 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:18:42.138320 sh[596]: Success Feb 13 19:18:42.150953 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:18:42.187968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:18:42.201472 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:18:42.203808 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:18:42.218943 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b Feb 13 19:18:42.218980 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:42.218991 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:18:42.221271 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:18:42.221292 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:18:42.226113 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:18:42.228440 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:18:42.243108 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:18:42.244517 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:18:42.260208 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:42.260252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:42.260273 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:18:42.263946 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:18:42.272657 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:18:42.274345 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:42.284237 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:18:42.293116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:18:42.344212 ignition[695]: Ignition 2.20.0 Feb 13 19:18:42.344223 ignition[695]: Stage: fetch-offline Feb 13 19:18:42.344256 ignition[695]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:42.344266 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:42.344378 ignition[695]: parsed url from cmdline: "" Feb 13 19:18:42.344382 ignition[695]: no config URL provided Feb 13 19:18:42.344387 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:18:42.344396 ignition[695]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:18:42.344424 ignition[695]: op(1): [started] loading QEMU firmware config module Feb 13 19:18:42.344429 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:18:42.354981 ignition[695]: op(1): [finished] loading QEMU firmware config module Feb 13 19:18:42.368651 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:18:42.384134 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:18:42.399474 ignition[695]: parsing config with SHA512: 2754dd35307c5d31ceacd71003f9390994727e820aff8efb13383c2ad5a9d9ac9a25d417739f0dd0dbdf5052c9d21c1520c27793d13b7f390dd1aec024e53b84 Feb 13 19:18:42.404281 unknown[695]: fetched base config from "system" Feb 13 19:18:42.404295 unknown[695]: fetched user config from "qemu" Feb 13 19:18:42.404833 ignition[695]: fetch-offline: fetch-offline passed Feb 13 19:18:42.404944 ignition[695]: Ignition finished successfully Feb 13 19:18:42.407377 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:18:42.417068 systemd-networkd[785]: lo: Link UP Feb 13 19:18:42.417080 systemd-networkd[785]: lo: Gained carrier Feb 13 19:18:42.420238 systemd-networkd[785]: Enumeration completed Feb 13 19:18:42.420324 systemd[1]: Started systemd-networkd.service - Network Configuration. 
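[Editor's note: the fetch-offline stage above looked for a user config at /usr/lib/ignition/user.ign and ultimately fetched one via the QEMU firmware config module. For reference, such a config is a JSON document against the Ignition v3 spec; a minimal illustrative example (the file path and contents here are hypothetical, not taken from this boot):]

```json
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": { "source": "data:,myhost" },
        "mode": 420
      }
    ]
  }
}
```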
Feb 13 19:18:42.422058 systemd[1]: Reached target network.target - Network. Feb 13 19:18:42.422599 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:18:42.427405 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:42.427417 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:18:42.431485 systemd-networkd[785]: eth0: Link UP Feb 13 19:18:42.431496 systemd-networkd[785]: eth0: Gained carrier Feb 13 19:18:42.431504 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:42.433118 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:18:42.449904 ignition[789]: Ignition 2.20.0 Feb 13 19:18:42.449914 ignition[789]: Stage: kargs Feb 13 19:18:42.450067 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:42.450078 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:42.450805 ignition[789]: kargs: kargs passed Feb 13 19:18:42.450843 ignition[789]: Ignition finished successfully Feb 13 19:18:42.457964 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:18:42.457989 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:18:42.468108 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
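[Editor's note: eth0 was matched by the catch-all /usr/lib/systemd/network/zz-default.network and then obtained 10.0.0.49/16 over DHCP, as logged above. A match-all DHCP unit of this kind looks roughly like the following sketch (illustrative, not the verbatim shipped file):]

```ini
[Match]
# zz- prefix sorts last, so this only matches interfaces no earlier unit claimed.
Name=*

[Network]
# Enable DHCP; this is how eth0 acquired 10.0.0.49/16 from 10.0.0.1 above.
DHCP=yes
```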
Feb 13 19:18:42.481448 ignition[797]: Ignition 2.20.0 Feb 13 19:18:42.481460 ignition[797]: Stage: disks Feb 13 19:18:42.481622 ignition[797]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:42.481635 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:42.482435 ignition[797]: disks: disks passed Feb 13 19:18:42.482478 ignition[797]: Ignition finished successfully Feb 13 19:18:42.486450 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:18:42.489019 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:18:42.506033 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:18:42.508383 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:18:42.510369 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:18:42.512335 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:18:42.526052 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:18:42.540199 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:18:42.561332 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:18:43.231009 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:18:43.317954 kernel: EXT4-fs (vda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none. Feb 13 19:18:43.318331 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:18:43.320603 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:18:43.340036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:18:43.343186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:18:43.345943 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 19:18:43.345998 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:18:43.354722 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) Feb 13 19:18:43.354749 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:43.354761 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:43.354772 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:18:43.347783 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:18:43.356945 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:18:43.358674 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:18:43.360478 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:18:43.363833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:18:43.398879 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:18:43.403831 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:18:43.408570 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:18:43.413391 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:18:43.500266 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:18:43.512014 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:18:43.515389 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:18:43.519944 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:43.539410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:18:43.541357 ignition[928]: INFO : Ignition 2.20.0 Feb 13 19:18:43.541357 ignition[928]: INFO : Stage: mount Feb 13 19:18:43.541357 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:43.541357 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:43.546042 ignition[928]: INFO : mount: mount passed Feb 13 19:18:43.546042 ignition[928]: INFO : Ignition finished successfully Feb 13 19:18:43.544159 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:18:43.550042 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:18:43.776272 systemd-networkd[785]: eth0: Gained IPv6LL Feb 13 19:18:44.218169 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:18:44.235080 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:18:44.242505 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944) Feb 13 19:18:44.242534 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:44.242546 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:44.244020 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:18:44.246953 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:18:44.248035 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:18:44.271313 ignition[961]: INFO : Ignition 2.20.0 Feb 13 19:18:44.271313 ignition[961]: INFO : Stage: files Feb 13 19:18:44.272998 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:44.272998 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:44.272998 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:18:44.276630 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:18:44.276630 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:18:44.279887 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:18:44.279887 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:18:44.283074 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:18:44.283074 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:18:44.283074 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:18:44.279908 unknown[961]: wrote ssh authorized keys file for user: core Feb 13 19:18:44.321097 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:18:44.453052 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:18:44.453052 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:18:44.457385 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:18:44.992812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:18:45.243406 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:18:45.243406 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:18:45.246812 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:18:45.263042 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:18:45.267364 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:18:45.269085 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:18:45.269085 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:18:45.271846 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:18:45.273303 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:18:45.275245 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:18:45.277190 ignition[961]: INFO : files: files passed Feb 13 19:18:45.277941 ignition[961]: INFO : Ignition finished successfully Feb 13 19:18:45.280521 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:18:45.294043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:18:45.296865 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:18:45.299568 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:18:45.300582 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:18:45.305840 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:18:45.310000 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:45.310000 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:45.313184 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:45.316294 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:18:45.316817 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
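[Editor's note, not part of the journal output: every Ignition operation above is logged as an `op(N): [started]` / `op(N): [finished]` pair, so a successful files stage leaves no unmatched `[started]` markers. A minimal sketch of that check; `unfinished_ops` is a hypothetical helper, and `LOG` holds a few lines abridged from the log above.]

```python
import re

# Sample entries abridged from the journal dump above (message text only).
LOG = """\
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
"""

# Matches the "op(<id>): [started|finished]" markers Ignition emits.
OP_RE = re.compile(r"op\((\w+)\): \[(started|finished)\]")

def unfinished_ops(text: str) -> set:
    """Return op ids that were started but never reported finished."""
    started, finished = set(), set()
    for op_id, state in OP_RE.findall(text):
        (started if state == "started" else finished).add(op_id)
    return started - finished

print(unfinished_ops(LOG))  # set() when every op completed
```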
Feb 13 19:18:45.329034 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:18:45.351147 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:18:45.352209 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:18:45.354797 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:18:45.356784 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:18:45.358765 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:18:45.376083 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:18:45.394945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:18:45.410028 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:18:45.420587 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:45.421836 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:45.424053 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:18:45.426054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:18:45.426166 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:18:45.428244 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:18:45.430041 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:18:45.432064 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:18:45.434008 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:18:45.435977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:18:45.438112 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Feb 13 19:18:45.440187 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:18:45.442444 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:18:45.448932 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:18:45.451270 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:18:45.453268 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:18:45.453403 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:18:45.455543 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:45.457135 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:45.459184 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:18:45.459303 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:45.461384 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:18:45.461492 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:18:45.463710 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:18:45.463819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:18:45.465851 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:18:45.467598 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:18:45.471000 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:18:45.471384 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:18:45.471721 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:18:45.472209 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:18:45.472300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 19:18:45.476939 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:18:45.477029 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:18:45.477416 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:18:45.477532 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:18:45.481342 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:18:45.481454 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:18:45.498072 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:18:45.498514 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:18:45.498654 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:45.501117 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:18:45.502360 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:18:45.502518 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:45.504349 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:18:45.504448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:18:45.512200 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:18:45.512314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:18:45.525143 ignition[1016]: INFO : Ignition 2.20.0 Feb 13 19:18:45.525143 ignition[1016]: INFO : Stage: umount Feb 13 19:18:45.526970 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:45.526970 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:18:45.529266 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 19:18:45.530731 ignition[1016]: INFO : umount: umount passed Feb 13 19:18:45.531645 ignition[1016]: INFO : Ignition finished successfully Feb 13 19:18:45.534703 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:18:45.534828 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:18:45.535746 systemd[1]: Stopped target network.target - Network. Feb 13 19:18:45.538141 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:18:45.538198 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:18:45.538586 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:18:45.538632 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:18:45.538903 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:18:45.538964 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:18:45.539389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:18:45.539431 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:18:45.539835 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:18:45.547337 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:18:45.560900 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:18:45.561057 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:18:45.565372 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:18:45.565586 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:18:45.565722 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:18:45.569817 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:18:45.570612 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 13 19:18:45.570659 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:45.576079 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:18:45.577051 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:18:45.577136 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:18:45.579299 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:18:45.579353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:45.581414 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:18:45.581463 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:45.583777 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:18:45.583826 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:45.585140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:45.588242 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:18:45.588322 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:18:45.598468 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:18:45.598619 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:18:45.601683 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:18:45.601867 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:45.604077 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:18:45.604123 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:45.606123 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 19:18:45.606161 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:45.608085 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:18:45.608141 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:18:45.610322 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:18:45.610370 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:18:45.612255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:18:45.612302 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:45.623060 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:18:45.625229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:18:45.625284 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:45.627574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:18:45.627623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:45.630741 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:18:45.630803 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:18:45.631159 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:18:45.631262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:18:45.730239 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:18:45.730407 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:18:45.731079 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:18:45.731281 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 13 19:18:45.731337 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:18:45.742136 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:18:45.751161 systemd[1]: Switching root. Feb 13 19:18:45.782111 systemd-journald[194]: Journal stopped Feb 13 19:18:47.008198 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Feb 13 19:18:47.008261 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:18:47.008279 kernel: SELinux: policy capability open_perms=1 Feb 13 19:18:47.008296 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:18:47.008307 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:18:47.008319 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:18:47.008334 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:18:47.008345 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:18:47.008357 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:18:47.008372 kernel: audit: type=1403 audit(1739474326.231:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:18:47.008384 systemd[1]: Successfully loaded SELinux policy in 42.555ms. Feb 13 19:18:47.008404 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.655ms. Feb 13 19:18:47.008417 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:18:47.008429 systemd[1]: Detected virtualization kvm. Feb 13 19:18:47.008442 systemd[1]: Detected architecture x86-64. Feb 13 19:18:47.008456 systemd[1]: Detected first boot. Feb 13 19:18:47.008469 systemd[1]: Initializing machine ID from VM UUID. 
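[Editor's note, not part of the journal output: the gap between "Journal stopped" before the root switch and the first record from the new root can be read straight off the shared `Feb 13 HH:MM:SS.ffffff` prefixes. A minimal sketch; `elapsed_seconds` is a hypothetical helper, and the two timestamps are taken from the entries above.]

```python
from datetime import datetime

def elapsed_seconds(start: str, end: str) -> float:
    """Difference between two journal timestamp prefixes, in seconds.

    The year is absent from the prefix, so this only holds within one
    calendar year (strptime defaults both stamps to the same year).
    """
    fmt = "%b %d %H:%M:%S.%f"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()

# "Journal stopped" vs. first SIGTERM/SELinux record after switch root.
gap = elapsed_seconds("Feb 13 19:18:45.782111", "Feb 13 19:18:47.008198")
print(f"{gap:.3f}s")  # 1.226s of journal downtime across the root switch
```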
Feb 13 19:18:47.008481 zram_generator::config[1064]: No configuration found. Feb 13 19:18:47.008494 kernel: Guest personality initialized and is inactive Feb 13 19:18:47.008513 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 19:18:47.008524 kernel: Initialized host personality Feb 13 19:18:47.008538 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:18:47.008549 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:18:47.008563 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:18:47.008578 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:18:47.008594 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:18:47.008606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:18:47.008619 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:18:47.008631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:18:47.008643 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:18:47.008655 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:18:47.008668 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:18:47.008682 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:18:47.008695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:18:47.008707 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:18:47.008719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:47.008732 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:18:47.008746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:18:47.008759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:18:47.008773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:18:47.008786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:18:47.008800 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:18:47.008814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:47.008826 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:18:47.008839 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:18:47.008851 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:18:47.008863 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:18:47.008875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:47.008894 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:18:47.008906 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:18:47.008918 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:18:47.008942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:18:47.008955 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:18:47.008967 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:18:47.008978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:47.008991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:18:47.009003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:47.009015 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:18:47.009030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:18:47.009042 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:18:47.009054 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:18:47.009066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:18:47.009078 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:18:47.009092 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:18:47.009104 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:18:47.009117 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:18:47.009132 systemd[1]: Reached target machines.target - Containers. Feb 13 19:18:47.009144 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:18:47.009156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:18:47.009168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:18:47.009181 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:18:47.009193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:18:47.009205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:18:47.009217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 19:18:47.009229 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:18:47.009244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:18:47.009257 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:18:47.009269 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:18:47.009281 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:18:47.009293 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:18:47.009305 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:18:47.009317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:18:47.009329 kernel: fuse: init (API version 7.39) Feb 13 19:18:47.009342 kernel: loop: module loaded Feb 13 19:18:47.009356 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:18:47.009367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:18:47.009380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:18:47.009392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:18:47.009404 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:18:47.009417 kernel: ACPI: bus type drm_connector registered Feb 13 19:18:47.009428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:18:47.009443 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:18:47.009455 systemd[1]: Stopped verity-setup.service. 
Feb 13 19:18:47.009468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:18:47.009505 systemd-journald[1139]: Collecting audit messages is disabled. Feb 13 19:18:47.009534 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:18:47.009547 systemd-journald[1139]: Journal started Feb 13 19:18:47.009570 systemd-journald[1139]: Runtime Journal (/run/log/journal/21a329d8b27d41f6b65390c2b332b823) is 6M, max 48.2M, 42.2M free. Feb 13 19:18:46.784896 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:18:46.798760 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:18:46.799226 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:18:47.013572 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:18:47.014309 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:18:47.015536 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:18:47.016622 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:18:47.017816 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:18:47.019031 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:18:47.020309 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:18:47.021773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:47.023288 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:18:47.023517 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:18:47.024999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:18:47.025220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 19:18:47.026648 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:18:47.026871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:18:47.028477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:18:47.028707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:18:47.030213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:18:47.030429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:18:47.031822 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:18:47.032083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:18:47.033649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:47.035147 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:18:47.036733 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:18:47.038481 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:18:47.053370 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:18:47.070015 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:18:47.073050 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:18:47.074202 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:18:47.074235 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:18:47.076317 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:18:47.078675 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 19:18:47.082711 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:18:47.083853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:18:47.087092 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:18:47.089152 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:18:47.090317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:18:47.095097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:18:47.098121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:18:47.099624 systemd-journald[1139]: Time spent on flushing to /var/log/journal/21a329d8b27d41f6b65390c2b332b823 is 25.122ms for 1050 entries.
Feb 13 19:18:47.099624 systemd-journald[1139]: System Journal (/var/log/journal/21a329d8b27d41f6b65390c2b332b823) is 8M, max 195.6M, 187.6M free.
Feb 13 19:18:47.133602 systemd-journald[1139]: Received client request to flush runtime journal.
Feb 13 19:18:47.133641 kernel: loop0: detected capacity change from 0 to 147912
Feb 13 19:18:47.100043 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:18:47.107113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:18:47.109546 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:18:47.112914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:18:47.115143 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:18:47.116572 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:18:47.118311 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:18:47.119947 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:18:47.125763 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:18:47.137080 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:18:47.142194 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:18:47.147655 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:18:47.151412 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:18:47.156938 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:18:47.162091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:18:47.162871 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:18:47.165970 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:18:47.168981 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:18:47.176097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:18:47.189972 kernel: loop1: detected capacity change from 0 to 138176
Feb 13 19:18:47.196284 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Feb 13 19:18:47.196339 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Feb 13 19:18:47.202836 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:18:47.228004 kernel: loop2: detected capacity change from 0 to 218376
Feb 13 19:18:47.257827 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 19:18:47.269939 kernel: loop4: detected capacity change from 0 to 138176
Feb 13 19:18:47.283938 kernel: loop5: detected capacity change from 0 to 218376
Feb 13 19:18:47.292999 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:18:47.293624 (sd-merge)[1209]: Merged extensions into '/usr'.
Feb 13 19:18:47.298407 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:18:47.298511 systemd[1]: Reloading...
Feb 13 19:18:47.348965 zram_generator::config[1234]: No configuration found.
Feb 13 19:18:47.410386 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:18:47.491944 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:18:47.557714 systemd[1]: Reloading finished in 258 ms.
Feb 13 19:18:47.577894 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:18:47.601865 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:18:47.619378 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:18:47.621365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:18:47.631273 systemd[1]: Reload requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:18:47.631287 systemd[1]: Reloading...
Feb 13 19:18:47.643204 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:18:47.643559 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:18:47.644581 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:18:47.644869 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Feb 13 19:18:47.645006 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
Feb 13 19:18:47.648868 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:18:47.648880 systemd-tmpfiles[1275]: Skipping /boot
Feb 13 19:18:47.662361 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:18:47.662375 systemd-tmpfiles[1275]: Skipping /boot
Feb 13 19:18:47.686957 zram_generator::config[1305]: No configuration found.
Feb 13 19:18:47.913605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:18:47.978947 systemd[1]: Reloading finished in 347 ms.
Feb 13 19:18:48.007673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:18:48.016399 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:18:48.018747 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:18:48.021271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:18:48.025669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:18:48.028480 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:18:48.034541 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:18:48.034760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:18:48.037393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:18:48.040834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:18:48.046163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:18:48.047323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:18:48.047422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:18:48.052088 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:18:48.053321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:18:48.054792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:18:48.055019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:18:48.056722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:18:48.057038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:18:48.059253 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:18:48.059535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:18:48.061294 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:18:48.075151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:18:48.081051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:18:48.081244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:18:48.092206 augenrules[1376]: No rules
Feb 13 19:18:48.094220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:18:48.096788 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:18:48.102133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:18:48.119666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:18:48.120036 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:18:48.120265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:18:48.123768 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:18:48.125504 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:18:48.127242 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:18:48.127500 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:18:48.129082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:18:48.130973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:18:48.131189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:18:48.132769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:18:48.133006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:18:48.134860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:18:48.135092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:18:48.145728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:18:48.154131 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:18:48.155204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:18:48.156197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:18:48.161059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:18:48.165193 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:18:48.168161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:18:48.169311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:18:48.169349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:18:48.172419 augenrules[1394]: /sbin/augenrules: No change
Feb 13 19:18:48.171192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:18:48.175091 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:18:48.176294 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:18:48.176372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:18:48.177455 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:18:48.178901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:18:48.179278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:18:48.181335 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:18:48.181576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:18:48.183293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:18:48.183521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:18:48.185491 augenrules[1417]: No rules
Feb 13 19:18:48.185330 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:18:48.185725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:18:48.187284 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:18:48.187573 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:18:48.188346 systemd-resolved[1345]: Positive Trust Anchors:
Feb 13 19:18:48.188362 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:18:48.188394 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:18:48.190190 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:18:48.192679 systemd-resolved[1345]: Defaulting to hostname 'linux'.
Feb 13 19:18:48.194630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:18:48.198440 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:18:48.199684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:18:48.199751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:18:48.206070 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:18:48.211983 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
Feb 13 19:18:48.231129 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:18:48.242057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:18:48.270564 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:18:48.293991 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1442)
Feb 13 19:18:48.298487 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:18:48.299944 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:18:48.319356 systemd-networkd[1438]: lo: Link UP
Feb 13 19:18:48.319632 systemd-networkd[1438]: lo: Gained carrier
Feb 13 19:18:48.321866 systemd-networkd[1438]: Enumeration completed
Feb 13 19:18:48.322013 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:18:48.328671 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:18:48.328735 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:18:48.330090 systemd-networkd[1438]: eth0: Link UP
Feb 13 19:18:48.330144 systemd-networkd[1438]: eth0: Gained carrier
Feb 13 19:18:48.330192 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:18:48.331171 systemd[1]: Reached target network.target - Network.
Feb 13 19:18:48.338117 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:18:48.340612 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:18:48.341274 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:18:48.343255 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection.
Feb 13 19:18:49.501114 systemd-resolved[1345]: Clock change detected. Flushing caches.
Feb 13 19:18:49.501332 systemd-timesyncd[1429]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:18:49.501380 systemd-timesyncd[1429]: Initial clock synchronization to Thu 2025-02-13 19:18:49.501008 UTC.
Feb 13 19:18:49.502787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:18:49.511979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:18:49.514108 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:18:49.518058 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:18:49.520873 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 19:18:49.530016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:18:49.547192 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 19:18:49.551460 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 19:18:49.554699 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:18:49.554877 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:18:49.555109 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:18:49.574955 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:18:49.578702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:18:49.631228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:18:49.665279 kernel: kvm_amd: TSC scaling supported
Feb 13 19:18:49.665314 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:18:49.665350 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:18:49.666256 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:18:49.666271 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:18:49.667246 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:18:49.689921 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:18:49.719320 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:18:49.746061 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:18:49.753948 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:18:49.782158 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:18:49.783688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:18:49.784824 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:18:49.786004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:18:49.787288 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:18:49.788729 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:18:49.789920 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:18:49.791345 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:18:49.792596 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:18:49.792625 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:18:49.793557 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:18:49.795215 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:18:49.798080 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:18:49.801484 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 19:18:49.802891 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 19:18:49.804195 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 19:18:49.811306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:18:49.812746 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 19:18:49.815297 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:18:49.816914 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:18:49.818081 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:18:49.819078 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:18:49.820071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:18:49.820101 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:18:49.821075 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:18:49.823210 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:18:49.826030 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:18:49.826813 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:18:49.830264 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:18:49.832126 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:18:49.834288 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:18:49.836437 jq[1486]: false
Feb 13 19:18:49.839141 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:18:49.844863 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:18:49.848318 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:18:49.851408 dbus-daemon[1485]: [system] SELinux support is enabled
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found loop3
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found loop4
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found loop5
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found sr0
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found vda
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found vda1
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found vda2
Feb 13 19:18:49.851724 extend-filesystems[1487]: Found vda3
Feb 13 19:18:49.880061 extend-filesystems[1487]: Found usr
Feb 13 19:18:49.880061 extend-filesystems[1487]: Found vda4
Feb 13 19:18:49.880061 extend-filesystems[1487]: Found vda6
Feb 13 19:18:49.880061 extend-filesystems[1487]: Found vda7
Feb 13 19:18:49.880061 extend-filesystems[1487]: Found vda9
Feb 13 19:18:49.880061 extend-filesystems[1487]: Checking size of /dev/vda9
Feb 13 19:18:49.854714 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:18:49.874164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:18:49.874628 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:18:49.877354 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:18:49.880035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:18:49.891472 jq[1502]: true
Feb 13 19:18:49.881364 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:18:49.885430 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:18:49.892550 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:18:49.892807 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:18:49.893152 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:18:49.893381 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:18:49.896456 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:18:49.897807 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:18:49.911415 update_engine[1501]: I20250213 19:18:49.911093 1501 main.cc:92] Flatcar Update Engine starting
Feb 13 19:18:49.912005 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:18:49.912747 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:18:49.927638 update_engine[1501]: I20250213 19:18:49.916079 1501 update_check_scheduler.cc:74] Next update check in 4m52s
Feb 13 19:18:49.927679 jq[1508]: true
Feb 13 19:18:49.912786 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:18:49.928302 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:18:49.928328 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:18:49.931607 extend-filesystems[1487]: Resized partition /dev/vda9
Feb 13 19:18:49.936349 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:18:49.943022 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1449)
Feb 13 19:18:49.943048 tar[1507]: linux-amd64/LICENSE
Feb 13 19:18:49.943048 tar[1507]: linux-amd64/helm
Feb 13 19:18:49.943594 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:18:49.957098 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:18:49.986553 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:18:50.008582 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:18:50.016069 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:18:50.023922 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:18:50.024236 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:18:50.064414 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:18:50.067129 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:18:50.243145 systemd-logind[1499]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:18:50.243177 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:18:50.246684 systemd-logind[1499]: New seat seat0.
Feb 13 19:18:50.249383 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:18:50.255472 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:18:50.263189 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:18:50.265503 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:18:50.266793 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:18:50.370662 tar[1507]: linux-amd64/README.md
Feb 13 19:18:50.378865 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:18:50.381260 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:48770.service - OpenSSH per-connection server daemon (10.0.0.1:48770).
Feb 13 19:18:50.388757 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:18:50.458696 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:18:50.654963 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:18:51.056843 sshd[1559]: Connection closed by authenticating user core 10.0.0.1 port 48770 [preauth]
Feb 13 19:18:50.735615 systemd[1]: sshd@0-10.0.0.49:22-10.0.0.1:48770.service: Deactivated successfully.
Feb 13 19:18:51.057491 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:18:51.057491 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:18:51.057491 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:18:51.062996 extend-filesystems[1487]: Resized filesystem in /dev/vda9
Feb 13 19:18:51.059687 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:18:51.064106 containerd[1509]: time="2025-02-13T19:18:51.057697393Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:18:51.059990 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:18:51.065686 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:18:51.067614 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:18:51.069771 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:18:51.079227 containerd[1509]: time="2025-02-13T19:18:51.079179409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081024 containerd[1509]: time="2025-02-13T19:18:51.080953076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081024 containerd[1509]: time="2025-02-13T19:18:51.081011585Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:18:51.081072 containerd[1509]: time="2025-02-13T19:18:51.081027405Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:18:51.081264 containerd[1509]: time="2025-02-13T19:18:51.081232480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:18:51.081264 containerd[1509]: time="2025-02-13T19:18:51.081256986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081358 containerd[1509]: time="2025-02-13T19:18:51.081329702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081358 containerd[1509]: time="2025-02-13T19:18:51.081346353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081624 containerd[1509]: time="2025-02-13T19:18:51.081593807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081624 containerd[1509]: time="2025-02-13T19:18:51.081612532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081671 containerd[1509]: time="2025-02-13T19:18:51.081626469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081671 containerd[1509]: time="2025-02-13T19:18:51.081635936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.081767 containerd[1509]: time="2025-02-13T19:18:51.081735874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.082030 containerd[1509]: time="2025-02-13T19:18:51.081999889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:51.082189 containerd[1509]: time="2025-02-13T19:18:51.082161522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:51.082189 containerd[1509]: time="2025-02-13T19:18:51.082177973Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:18:51.082304 containerd[1509]: time="2025-02-13T19:18:51.082277960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:18:51.082357 containerd[1509]: time="2025-02-13T19:18:51.082338193Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:18:51.174686 containerd[1509]: time="2025-02-13T19:18:51.174626196Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:18:51.174727 containerd[1509]: time="2025-02-13T19:18:51.174713851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:18:51.174764 containerd[1509]: time="2025-02-13T19:18:51.174745129Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:18:51.174785 containerd[1509]: time="2025-02-13T19:18:51.174768203Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:18:51.174816 containerd[1509]: time="2025-02-13T19:18:51.174789482Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:18:51.175045 containerd[1509]: time="2025-02-13T19:18:51.175016338Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:18:51.175352 containerd[1509]: time="2025-02-13T19:18:51.175321711Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:18:51.175511 containerd[1509]: time="2025-02-13T19:18:51.175488163Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:18:51.175535 containerd[1509]: time="2025-02-13T19:18:51.175510204Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..."
type=io.containerd.sandbox.store.v1 Feb 13 19:18:51.175535 containerd[1509]: time="2025-02-13T19:18:51.175529721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:18:51.175687 containerd[1509]: time="2025-02-13T19:18:51.175650116Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175722 containerd[1509]: time="2025-02-13T19:18:51.175685763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175722 containerd[1509]: time="2025-02-13T19:18:51.175707364Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175773 containerd[1509]: time="2025-02-13T19:18:51.175731038Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175773 containerd[1509]: time="2025-02-13T19:18:51.175757187Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175815 containerd[1509]: time="2025-02-13T19:18:51.175780060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175838 containerd[1509]: time="2025-02-13T19:18:51.175810307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175865 containerd[1509]: time="2025-02-13T19:18:51.175833751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:18:51.175888 containerd[1509]: time="2025-02-13T19:18:51.175865370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.175988231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176040679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176071797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176097926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176122592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176145766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176163239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176192143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176223672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176251063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176275840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 19:18:51.176314 containerd[1509]: time="2025-02-13T19:18:51.176299705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176591 containerd[1509]: time="2025-02-13T19:18:51.176386017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:18:51.176591 containerd[1509]: time="2025-02-13T19:18:51.176471878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176591 containerd[1509]: time="2025-02-13T19:18:51.176510330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.176591 containerd[1509]: time="2025-02-13T19:18:51.176527522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:18:51.176591 containerd[1509]: time="2025-02-13T19:18:51.176589729Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:18:51.176681 containerd[1509]: time="2025-02-13T19:18:51.176611008Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:18:51.176681 containerd[1509]: time="2025-02-13T19:18:51.176629052Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:18:51.176718 containerd[1509]: time="2025-02-13T19:18:51.176707129Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:18:51.176737 containerd[1509]: time="2025-02-13T19:18:51.176721135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 19:18:51.176761 containerd[1509]: time="2025-02-13T19:18:51.176734109Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:18:51.176761 containerd[1509]: time="2025-02-13T19:18:51.176747174Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:18:51.176797 containerd[1509]: time="2025-02-13T19:18:51.176759888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:18:51.177205 containerd[1509]: time="2025-02-13T19:18:51.177154528Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:18:51.177337 containerd[1509]: time="2025-02-13T19:18:51.177207898Z" level=info msg="Connect containerd service" Feb 13 19:18:51.177337 containerd[1509]: time="2025-02-13T19:18:51.177236652Z" level=info msg="using legacy CRI server" Feb 13 19:18:51.177337 containerd[1509]: time="2025-02-13T19:18:51.177243795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:18:51.177399 containerd[1509]: time="2025-02-13T19:18:51.177351778Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:18:51.177953 containerd[1509]: time="2025-02-13T19:18:51.177919643Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 19:18:51.178133 containerd[1509]: time="2025-02-13T19:18:51.178089261Z" level=info msg="Start subscribing containerd event" Feb 13 19:18:51.178189 containerd[1509]: time="2025-02-13T19:18:51.178155415Z" level=info msg="Start recovering state" Feb 13 19:18:51.178213 containerd[1509]: time="2025-02-13T19:18:51.178194919Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:18:51.178253 containerd[1509]: time="2025-02-13T19:18:51.178244873Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:18:51.178274 containerd[1509]: time="2025-02-13T19:18:51.178260612Z" level=info msg="Start event monitor" Feb 13 19:18:51.178295 containerd[1509]: time="2025-02-13T19:18:51.178283926Z" level=info msg="Start snapshots syncer" Feb 13 19:18:51.178315 containerd[1509]: time="2025-02-13T19:18:51.178296980Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:18:51.178315 containerd[1509]: time="2025-02-13T19:18:51.178304815Z" level=info msg="Start streaming server" Feb 13 19:18:51.178413 containerd[1509]: time="2025-02-13T19:18:51.178395174Z" level=info msg="containerd successfully booted in 0.338295s" Feb 13 19:18:51.178534 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:18:51.204137 systemd-networkd[1438]: eth0: Gained IPv6LL Feb 13 19:18:51.207094 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:18:51.222817 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:18:51.231145 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:18:51.233664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:51.235767 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:18:51.253651 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Feb 13 19:18:51.254034 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:18:51.256112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:18:51.258518 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:18:51.914600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:51.916556 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:18:51.918988 systemd[1]: Startup finished in 765ms (kernel) + 6.508s (initrd) + 4.571s (userspace) = 11.845s. Feb 13 19:18:51.925347 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:18:52.324182 kubelet[1603]: E0213 19:18:52.324055 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:18:52.327155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:18:52.327362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:18:52.327729 systemd[1]: kubelet.service: Consumed 961ms CPU time, 252.9M memory peak. Feb 13 19:19:00.744886 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:36594.service - OpenSSH per-connection server daemon (10.0.0.1:36594). Feb 13 19:19:00.786635 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:00.788657 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:00.799030 systemd-logind[1499]: New session 1 of user core. 
Feb 13 19:19:00.800331 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:19:00.810159 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:19:00.820365 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:19:00.823227 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:19:00.830209 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:19:00.832399 systemd-logind[1499]: New session c1 of user core. Feb 13 19:19:00.979282 systemd[1620]: Queued start job for default target default.target. Feb 13 19:19:00.991233 systemd[1620]: Created slice app.slice - User Application Slice. Feb 13 19:19:00.991259 systemd[1620]: Reached target paths.target - Paths. Feb 13 19:19:00.991301 systemd[1620]: Reached target timers.target - Timers. Feb 13 19:19:00.992903 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:19:01.003595 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:19:01.003730 systemd[1620]: Reached target sockets.target - Sockets. Feb 13 19:19:01.003776 systemd[1620]: Reached target basic.target - Basic System. Feb 13 19:19:01.003819 systemd[1620]: Reached target default.target - Main User Target. Feb 13 19:19:01.003854 systemd[1620]: Startup finished in 165ms. Feb 13 19:19:01.004274 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:19:01.005811 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:19:01.070313 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:36600.service - OpenSSH per-connection server daemon (10.0.0.1:36600). 
Feb 13 19:19:01.113889 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 36600 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:01.115525 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:01.119664 systemd-logind[1499]: New session 2 of user core. Feb 13 19:19:01.133064 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:19:01.187005 sshd[1633]: Connection closed by 10.0.0.1 port 36600 Feb 13 19:19:01.187361 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:01.198838 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:36600.service: Deactivated successfully. Feb 13 19:19:01.200708 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:19:01.202487 systemd-logind[1499]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:19:01.203827 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:36602.service - OpenSSH per-connection server daemon (10.0.0.1:36602). Feb 13 19:19:01.204775 systemd-logind[1499]: Removed session 2. Feb 13 19:19:01.245235 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 36602 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:01.246826 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:01.250881 systemd-logind[1499]: New session 3 of user core. Feb 13 19:19:01.260040 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:19:01.309105 sshd[1641]: Connection closed by 10.0.0.1 port 36602 Feb 13 19:19:01.309443 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:01.322735 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:36602.service: Deactivated successfully. Feb 13 19:19:01.324535 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:19:01.326196 systemd-logind[1499]: Session 3 logged out. Waiting for processes to exit. 
Feb 13 19:19:01.338300 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:36614.service - OpenSSH per-connection server daemon (10.0.0.1:36614). Feb 13 19:19:01.339471 systemd-logind[1499]: Removed session 3. Feb 13 19:19:01.374662 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 36614 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:01.376266 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:01.380681 systemd-logind[1499]: New session 4 of user core. Feb 13 19:19:01.390058 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:19:01.442732 sshd[1649]: Connection closed by 10.0.0.1 port 36614 Feb 13 19:19:01.443218 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:01.460766 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:36614.service: Deactivated successfully. Feb 13 19:19:01.462616 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:19:01.464028 systemd-logind[1499]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:19:01.474271 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:36616.service - OpenSSH per-connection server daemon (10.0.0.1:36616). Feb 13 19:19:01.475265 systemd-logind[1499]: Removed session 4. Feb 13 19:19:01.513379 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 36616 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:01.515023 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:01.519300 systemd-logind[1499]: New session 5 of user core. Feb 13 19:19:01.529089 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:19:01.587783 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:19:01.588216 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:19:01.605105 sudo[1658]: pam_unix(sudo:session): session closed for user root Feb 13 19:19:01.606597 sshd[1657]: Connection closed by 10.0.0.1 port 36616 Feb 13 19:19:01.607039 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:01.619461 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:36616.service: Deactivated successfully. Feb 13 19:19:01.621174 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:19:01.622984 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:19:01.638352 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:36624.service - OpenSSH per-connection server daemon (10.0.0.1:36624). Feb 13 19:19:01.639530 systemd-logind[1499]: Removed session 5. Feb 13 19:19:01.676849 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 36624 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:01.678367 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:01.682622 systemd-logind[1499]: New session 6 of user core. Feb 13 19:19:01.692042 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:19:01.744732 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:19:01.745072 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:19:01.748750 sudo[1668]: pam_unix(sudo:session): session closed for user root Feb 13 19:19:01.755015 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:19:01.755340 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:19:01.775317 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:19:01.803561 augenrules[1690]: No rules Feb 13 19:19:01.805288 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:19:01.805609 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:19:01.806723 sudo[1667]: pam_unix(sudo:session): session closed for user root Feb 13 19:19:01.808143 sshd[1666]: Connection closed by 10.0.0.1 port 36624 Feb 13 19:19:01.808465 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:01.816613 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:36624.service: Deactivated successfully. Feb 13 19:19:01.818414 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:19:01.820046 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:19:01.830208 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:36636.service - OpenSSH per-connection server daemon (10.0.0.1:36636). Feb 13 19:19:01.831257 systemd-logind[1499]: Removed session 6. Feb 13 19:19:01.866373 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 36636 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:01.867838 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:01.872054 systemd-logind[1499]: New session 7 of user core. 
Feb 13 19:19:01.883069 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:19:01.935543 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:19:01.935875 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:19:02.214169 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:19:02.214314 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:19:02.442463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:19:02.450187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:02.478091 dockerd[1722]: time="2025-02-13T19:19:02.477952748Z" level=info msg="Starting up" Feb 13 19:19:02.673089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:02.678123 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:19:02.728191 kubelet[1754]: E0213 19:19:02.728021 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:19:02.734958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:19:02.735199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:19:02.735645 systemd[1]: kubelet.service: Consumed 217ms CPU time, 106.6M memory peak. Feb 13 19:19:02.927411 dockerd[1722]: time="2025-02-13T19:19:02.927357422Z" level=info msg="Loading containers: start." 
Feb 13 19:19:03.093957 kernel: Initializing XFRM netlink socket Feb 13 19:19:03.173563 systemd-networkd[1438]: docker0: Link UP Feb 13 19:19:03.218337 dockerd[1722]: time="2025-02-13T19:19:03.218283763Z" level=info msg="Loading containers: done." Feb 13 19:19:03.231377 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3939600319-merged.mount: Deactivated successfully. Feb 13 19:19:03.233230 dockerd[1722]: time="2025-02-13T19:19:03.233187212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:19:03.233318 dockerd[1722]: time="2025-02-13T19:19:03.233292059Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:19:03.233445 dockerd[1722]: time="2025-02-13T19:19:03.233422884Z" level=info msg="Daemon has completed initialization" Feb 13 19:19:03.266961 dockerd[1722]: time="2025-02-13T19:19:03.266900166Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:19:03.267114 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:19:03.754655 containerd[1509]: time="2025-02-13T19:19:03.754622029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:19:04.288330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059307454.mount: Deactivated successfully. 
Feb 13 19:19:05.515267 containerd[1509]: time="2025-02-13T19:19:05.515205441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:05.515918 containerd[1509]: time="2025-02-13T19:19:05.515851122Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:19:05.517129 containerd[1509]: time="2025-02-13T19:19:05.517091729Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:05.519697 containerd[1509]: time="2025-02-13T19:19:05.519638214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:05.520695 containerd[1509]: time="2025-02-13T19:19:05.520653659Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 1.765998086s" Feb 13 19:19:05.520734 containerd[1509]: time="2025-02-13T19:19:05.520695808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:19:05.521216 containerd[1509]: time="2025-02-13T19:19:05.521195725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:19:07.081817 containerd[1509]: time="2025-02-13T19:19:07.081760802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:07.082611 containerd[1509]: time="2025-02-13T19:19:07.082539993Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:19:07.083647 containerd[1509]: time="2025-02-13T19:19:07.083617504Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:07.086292 containerd[1509]: time="2025-02-13T19:19:07.086264117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:07.087312 containerd[1509]: time="2025-02-13T19:19:07.087273680Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.566051215s" Feb 13 19:19:07.087379 containerd[1509]: time="2025-02-13T19:19:07.087316651Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:19:07.088030 containerd[1509]: time="2025-02-13T19:19:07.087847456Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:19:08.684681 containerd[1509]: time="2025-02-13T19:19:08.684619895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:08.718965 containerd[1509]: time="2025-02-13T19:19:08.718892328Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:19:08.735111 containerd[1509]: time="2025-02-13T19:19:08.735054538Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:08.797432 containerd[1509]: time="2025-02-13T19:19:08.797382264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:08.798415 containerd[1509]: time="2025-02-13T19:19:08.798367181Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.710489178s" Feb 13 19:19:08.798469 containerd[1509]: time="2025-02-13T19:19:08.798417586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:19:08.798966 containerd[1509]: time="2025-02-13T19:19:08.798945386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:19:10.647889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311867814.mount: Deactivated successfully. 
Feb 13 19:19:11.450495 containerd[1509]: time="2025-02-13T19:19:11.450435005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:11.451291 containerd[1509]: time="2025-02-13T19:19:11.451253881Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:19:11.452330 containerd[1509]: time="2025-02-13T19:19:11.452305593Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:11.454311 containerd[1509]: time="2025-02-13T19:19:11.454282220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:11.454839 containerd[1509]: time="2025-02-13T19:19:11.454801153Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.65583041s" Feb 13 19:19:11.454839 containerd[1509]: time="2025-02-13T19:19:11.454826260Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:19:11.455340 containerd[1509]: time="2025-02-13T19:19:11.455308946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:19:12.526362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115646634.mount: Deactivated successfully. 
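Each "Pulled image" entry pairs a repo-digest size with a wall-clock duration (e.g. kube-proxy above: size 30907858 bytes in 2.65583041s), so effective pull throughput can be derived straight from the log. A small sketch using the figures from the entry above:

```python
def pull_throughput(size_bytes: int, duration_s: float) -> float:
    """Effective pull rate in MiB/s from a containerd 'Pulled image' entry."""
    return size_bytes / duration_s / (1024 * 1024)

# kube-proxy:v1.32.2, numbers as logged above
rate = pull_throughput(30_907_858, 2.65583041)
print(f"{rate:.1f} MiB/s")
```

The same calculation applied across the pulls in this boot (apiserver, controller-manager, scheduler, coredns) gives a rough picture of registry bandwidth during node bring-up.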
Feb 13 19:19:12.942499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:19:12.957087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:13.124651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:13.129086 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:19:13.428540 kubelet[2056]: E0213 19:19:13.428331 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:19:13.433046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:19:13.433255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:19:13.433623 systemd[1]: kubelet.service: Consumed 361ms CPU time, 103.9M memory peak. 
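The kubelet exit above (status=1, restart counter at 2) is caused by `/var/lib/kubelet/config.yaml` not existing yet. On a kubeadm-managed node that file is only written by `kubeadm init` or `kubeadm join`, so this crash-and-restart loop is expected until the node is bootstrapped. A minimal preflight sketch of the effective startup check (the helper name is hypothetical):

```python
from pathlib import Path

def kubelet_config_present(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    """Mirror the check the kubelet effectively performs at startup: without
    this file it exits with status 1 and systemd schedules a restart."""
    return Path(path).is_file()

if not kubelet_config_present():
    print("config.yaml missing - expected before `kubeadm init`/`kubeadm join` runs")
```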
Feb 13 19:19:13.759108 containerd[1509]: time="2025-02-13T19:19:13.759050814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:13.759803 containerd[1509]: time="2025-02-13T19:19:13.759729817Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:19:13.761017 containerd[1509]: time="2025-02-13T19:19:13.760993377Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:13.763917 containerd[1509]: time="2025-02-13T19:19:13.763865132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:13.765156 containerd[1509]: time="2025-02-13T19:19:13.765127420Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.309786164s" Feb 13 19:19:13.765220 containerd[1509]: time="2025-02-13T19:19:13.765155943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:19:13.765774 containerd[1509]: time="2025-02-13T19:19:13.765620845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:19:14.241894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323501455.mount: Deactivated successfully. 
Feb 13 19:19:14.248553 containerd[1509]: time="2025-02-13T19:19:14.248504885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:14.250143 containerd[1509]: time="2025-02-13T19:19:14.250082584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:19:14.251395 containerd[1509]: time="2025-02-13T19:19:14.251349430Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:14.253789 containerd[1509]: time="2025-02-13T19:19:14.253738841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:14.254852 containerd[1509]: time="2025-02-13T19:19:14.254797556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 489.13386ms" Feb 13 19:19:14.254852 containerd[1509]: time="2025-02-13T19:19:14.254846327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:19:14.255389 containerd[1509]: time="2025-02-13T19:19:14.255351735Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:19:14.792102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280726007.mount: Deactivated successfully. 
Feb 13 19:19:16.820821 containerd[1509]: time="2025-02-13T19:19:16.820733732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:16.821996 containerd[1509]: time="2025-02-13T19:19:16.821909988Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:19:16.823111 containerd[1509]: time="2025-02-13T19:19:16.823075825Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:16.826036 containerd[1509]: time="2025-02-13T19:19:16.825990390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:16.827321 containerd[1509]: time="2025-02-13T19:19:16.827278165Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.571897166s" Feb 13 19:19:16.827321 containerd[1509]: time="2025-02-13T19:19:16.827310987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:19:18.872534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:18.872822 systemd[1]: kubelet.service: Consumed 361ms CPU time, 103.9M memory peak. Feb 13 19:19:18.891136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:18.916194 systemd[1]: Reload requested from client PID 2165 ('systemctl') (unit session-7.scope)... 
Feb 13 19:19:18.916210 systemd[1]: Reloading... Feb 13 19:19:19.009006 zram_generator::config[2209]: No configuration found. Feb 13 19:19:19.306183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:19:19.408764 systemd[1]: Reloading finished in 492 ms. Feb 13 19:19:19.452892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:19.457134 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:19.458088 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:19:19.458338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:19.458375 systemd[1]: kubelet.service: Consumed 143ms CPU time, 91.9M memory peak. Feb 13 19:19:19.459847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:19.617648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:19.622574 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:19:19.659236 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:19.659236 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:19:19.659236 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
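The deprecation warnings above say that `--container-runtime-endpoint`, `--pod-infra-container-image`, and `--volume-plugin-dir` should move into the file passed via the kubelet's `--config` flag. A hedged sketch of the corresponding `KubeletConfiguration` fields (the values are illustrative, not read from this host's unit files):

```yaml
# Illustrative KubeletConfiguration fragment, not this host's actual config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no KubeletConfiguration equivalent; the
# sandbox (pause) image is configured on the runtime side instead, e.g.
# containerd's sandbox_image setting.
```

The `volumePluginDir` value matches the Flexvolume directory the kubelet recreates later in this log.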
Feb 13 19:19:19.659682 kubelet[2259]: I0213 19:19:19.659290 2259 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:19:20.176267 kubelet[2259]: I0213 19:19:20.176219 2259 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:19:20.176267 kubelet[2259]: I0213 19:19:20.176254 2259 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:19:20.176558 kubelet[2259]: I0213 19:19:20.176535 2259 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:19:20.198703 kubelet[2259]: I0213 19:19:20.198652 2259 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:19:20.199265 kubelet[2259]: E0213 19:19:20.199232 2259 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:20.205491 kubelet[2259]: E0213 19:19:20.205453 2259 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:19:20.205491 kubelet[2259]: I0213 19:19:20.205479 2259 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:19:20.210946 kubelet[2259]: I0213 19:19:20.210904 2259 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:19:20.212445 kubelet[2259]: I0213 19:19:20.212192 2259 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:19:20.212628 kubelet[2259]: I0213 19:19:20.212437 2259 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:19:20.212628 kubelet[2259]: I0213 19:19:20.212621 2259 topology_manager.go:138] "Creating topology manager with none policy" 
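The node-config dump above lists the default hard-eviction thresholds: `memory.available` < 100Mi, `nodefs.available` < 10%, `nodefs.inodesFree` < 5%, `imagefs.available` < 15%, `imagefs.inodesFree` < 5%. A simplified sketch of how a percentage threshold is evaluated (the real logic lives in the kubelet's eviction manager; this only illustrates the comparison):

```python
def below_threshold(available: float, capacity: float, pct_threshold: float) -> bool:
    """True if an eviction signal fires, i.e. the available amount has dropped
    below the configured fraction of capacity (e.g. 0.1 for nodefs.available)."""
    return available < capacity * pct_threshold

# nodefs.available < 10%: 8 GiB free out of 100 GiB would trigger eviction
print(below_threshold(8, 100, 0.10))
```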
Feb 13 19:19:20.212761 kubelet[2259]: I0213 19:19:20.212633 2259 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:19:20.212830 kubelet[2259]: I0213 19:19:20.212804 2259 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:20.215616 kubelet[2259]: I0213 19:19:20.215587 2259 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:19:20.215616 kubelet[2259]: I0213 19:19:20.215606 2259 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:19:20.215670 kubelet[2259]: I0213 19:19:20.215629 2259 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:19:20.215670 kubelet[2259]: I0213 19:19:20.215640 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:19:20.219031 kubelet[2259]: I0213 19:19:20.218976 2259 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:19:20.219438 kubelet[2259]: W0213 19:19:20.219152 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 13 19:19:20.219438 kubelet[2259]: E0213 19:19:20.219206 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:20.219438 kubelet[2259]: I0213 19:19:20.219331 2259 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:19:20.219563 kubelet[2259]: W0213 19:19:20.219522 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 13 19:19:20.219563 kubelet[2259]: E0213 19:19:20.219556 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:20.219821 kubelet[2259]: W0213 19:19:20.219802 2259 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:19:20.221452 kubelet[2259]: I0213 19:19:20.221425 2259 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:19:20.221502 kubelet[2259]: I0213 19:19:20.221461 2259 server.go:1287] "Started kubelet" Feb 13 19:19:20.223223 kubelet[2259]: I0213 19:19:20.222307 2259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:19:20.223223 kubelet[2259]: I0213 19:19:20.222723 2259 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:19:20.223223 kubelet[2259]: I0213 19:19:20.222793 2259 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:19:20.224120 kubelet[2259]: I0213 19:19:20.223709 2259 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:19:20.226748 kubelet[2259]: I0213 19:19:20.224247 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:19:20.226748 kubelet[2259]: I0213 19:19:20.224400 2259 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:19:20.226748 kubelet[2259]: E0213 19:19:20.224522 2259 kubelet_node_status.go:467] 
"Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:20.226748 kubelet[2259]: I0213 19:19:20.224546 2259 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:19:20.226748 kubelet[2259]: I0213 19:19:20.224717 2259 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:19:20.226748 kubelet[2259]: I0213 19:19:20.224757 2259 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:19:20.226748 kubelet[2259]: W0213 19:19:20.225040 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 13 19:19:20.226748 kubelet[2259]: E0213 19:19:20.225073 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:20.226748 kubelet[2259]: E0213 19:19:20.225629 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Feb 13 19:19:20.227000 kubelet[2259]: E0213 19:19:20.225075 2259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dab6e8e62093 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:19:20.221442195 +0000 UTC m=+0.594924137,LastTimestamp:2025-02-13 19:19:20.221442195 +0000 UTC m=+0.594924137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:19:20.227000 kubelet[2259]: E0213 19:19:20.226855 2259 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:19:20.227235 kubelet[2259]: I0213 19:19:20.227179 2259 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:19:20.227235 kubelet[2259]: I0213 19:19:20.227232 2259 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:19:20.227329 kubelet[2259]: I0213 19:19:20.227312 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:19:20.240676 kubelet[2259]: I0213 19:19:20.240617 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241738 2259 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241756 2259 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241771 2259 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241849 2259 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241863 2259 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241883 2259 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:19:20.242057 kubelet[2259]: I0213 19:19:20.241890 2259 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:19:20.242057 kubelet[2259]: E0213 19:19:20.241950 2259 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:19:20.243313 kubelet[2259]: W0213 19:19:20.242425 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 13 19:19:20.243313 kubelet[2259]: E0213 19:19:20.242460 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:20.245557 kubelet[2259]: I0213 19:19:20.245533 2259 policy_none.go:49] "None policy: Start" Feb 13 19:19:20.245557 kubelet[2259]: I0213 19:19:20.245556 2259 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:19:20.245622 kubelet[2259]: I0213 19:19:20.245568 2259 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:19:20.252960 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:19:20.269953 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:19:20.272824 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:19:20.282806 kubelet[2259]: I0213 19:19:20.282775 2259 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:19:20.283025 kubelet[2259]: I0213 19:19:20.283003 2259 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:19:20.283067 kubelet[2259]: I0213 19:19:20.283019 2259 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:19:20.283221 kubelet[2259]: I0213 19:19:20.283197 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:19:20.283904 kubelet[2259]: E0213 19:19:20.283850 2259 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:19:20.283904 kubelet[2259]: E0213 19:19:20.283889 2259 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:19:20.350221 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:19:20.360828 kubelet[2259]: E0213 19:19:20.360790 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:20.363076 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
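The `kubepods-burstable-pod<uid>.slice` units created above follow the kubelet's systemd cgroup driver convention (the dump earlier shows `CgroupDriver: "systemd"`): the slice name is built from the QoS class and the pod UID, with any `-` in the UID converted to `_` because `-` is systemd's slice-hierarchy separator. A simplified sketch (the helper name is hypothetical):

```python
def pod_slice_name(qos: str, pod_uid: str) -> str:
    """Approximate the systemd slice name the kubelet creates for a pod.
    '-' in the UID is replaced, since '-' separates slice hierarchy levels."""
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "c72911152bbceda2f57fd8d59261e015"))
# -> kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice
```

The static-pod UIDs in this log contain no dashes, which is why the slice names here match the UIDs verbatim.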
Feb 13 19:19:20.372203 kubelet[2259]: E0213 19:19:20.372174 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:20.374838 systemd[1]: Created slice kubepods-burstable-podf1f5da22dfe533a022166d9acb35e974.slice - libcontainer container kubepods-burstable-podf1f5da22dfe533a022166d9acb35e974.slice. Feb 13 19:19:20.376377 kubelet[2259]: E0213 19:19:20.376341 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:20.384131 kubelet[2259]: I0213 19:19:20.384110 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:19:20.384538 kubelet[2259]: E0213 19:19:20.384500 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 13 19:19:20.425991 kubelet[2259]: E0213 19:19:20.425967 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Feb 13 19:19:20.526362 kubelet[2259]: I0213 19:19:20.526337 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:19:20.526431 kubelet[2259]: I0213 19:19:20.526369 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:19:20.526431 kubelet[2259]: I0213 19:19:20.526388 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1f5da22dfe533a022166d9acb35e974-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1f5da22dfe533a022166d9acb35e974\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:19:20.526431 kubelet[2259]: I0213 19:19:20.526403 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:19:20.526431 kubelet[2259]: I0213 19:19:20.526420 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:19:20.526524 kubelet[2259]: I0213 19:19:20.526435 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:19:20.526524 kubelet[2259]: I0213 19:19:20.526457 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:19:20.526524 kubelet[2259]: I0213 19:19:20.526471 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1f5da22dfe533a022166d9acb35e974-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1f5da22dfe533a022166d9acb35e974\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:19:20.526524 kubelet[2259]: I0213 19:19:20.526487 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1f5da22dfe533a022166d9acb35e974-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1f5da22dfe533a022166d9acb35e974\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:19:20.586742 kubelet[2259]: I0213 19:19:20.586703 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:19:20.587093 kubelet[2259]: E0213 19:19:20.587060 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 13 19:19:20.662102 kubelet[2259]: E0213 19:19:20.662074 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:20.662622 containerd[1509]: time="2025-02-13T19:19:20.662590194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:20.672867 kubelet[2259]: E0213 19:19:20.672834 2259 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:20.673352 containerd[1509]: time="2025-02-13T19:19:20.673310168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:20.677581 kubelet[2259]: E0213 19:19:20.677545 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:20.677898 containerd[1509]: time="2025-02-13T19:19:20.677875279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1f5da22dfe533a022166d9acb35e974,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:20.827436 kubelet[2259]: E0213 19:19:20.827325 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Feb 13 19:19:20.989353 kubelet[2259]: I0213 19:19:20.989305 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:19:20.989695 kubelet[2259]: E0213 19:19:20.989647 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Feb 13 19:19:21.155535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671619534.mount: Deactivated successfully. 
Feb 13 19:19:21.162222 containerd[1509]: time="2025-02-13T19:19:21.162182058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:21.165074 containerd[1509]: time="2025-02-13T19:19:21.165016313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:19:21.165853 containerd[1509]: time="2025-02-13T19:19:21.165820160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:21.167662 containerd[1509]: time="2025-02-13T19:19:21.167618483Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:21.168428 containerd[1509]: time="2025-02-13T19:19:21.168373759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:19:21.169276 containerd[1509]: time="2025-02-13T19:19:21.169235405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:21.170125 containerd[1509]: time="2025-02-13T19:19:21.170085048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:19:21.171180 containerd[1509]: time="2025-02-13T19:19:21.171145257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:21.173609 
containerd[1509]: time="2025-02-13T19:19:21.173569784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.893928ms" Feb 13 19:19:21.174744 containerd[1509]: time="2025-02-13T19:19:21.174712216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.304014ms" Feb 13 19:19:21.176806 containerd[1509]: time="2025-02-13T19:19:21.176783490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.861974ms" Feb 13 19:19:21.218809 kubelet[2259]: W0213 19:19:21.218711 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 13 19:19:21.218809 kubelet[2259]: E0213 19:19:21.218768 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:21.298885 containerd[1509]: time="2025-02-13T19:19:21.298585372Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:21.298885 containerd[1509]: time="2025-02-13T19:19:21.298692383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:21.298885 containerd[1509]: time="2025-02-13T19:19:21.298709545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:21.298885 containerd[1509]: time="2025-02-13T19:19:21.298810755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.297547757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.298965746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.298982056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.299057087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.298994580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.299039955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.299053430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:21.299173 containerd[1509]: time="2025-02-13T19:19:21.299116078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:21.324089 systemd[1]: Started cri-containerd-383cb4c27e0ebc11014cb391130809cb726b3deb45323c748174b43f1a8dcc3f.scope - libcontainer container 383cb4c27e0ebc11014cb391130809cb726b3deb45323c748174b43f1a8dcc3f. Feb 13 19:19:21.325624 systemd[1]: Started cri-containerd-3e4aaaa634c737a14b2eb5c33d2f9ab629b28d057e268b21aa21640321388404.scope - libcontainer container 3e4aaaa634c737a14b2eb5c33d2f9ab629b28d057e268b21aa21640321388404. Feb 13 19:19:21.328372 systemd[1]: Started cri-containerd-729c0ad134c6a69ae197378c07626a50bfb0a35a95225c2d0fd7841a9447593c.scope - libcontainer container 729c0ad134c6a69ae197378c07626a50bfb0a35a95225c2d0fd7841a9447593c. 
Feb 13 19:19:21.364195 containerd[1509]: time="2025-02-13T19:19:21.364158806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1f5da22dfe533a022166d9acb35e974,Namespace:kube-system,Attempt:0,} returns sandbox id \"383cb4c27e0ebc11014cb391130809cb726b3deb45323c748174b43f1a8dcc3f\"" Feb 13 19:19:21.365382 kubelet[2259]: E0213 19:19:21.365360 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:21.367770 containerd[1509]: time="2025-02-13T19:19:21.367623213Z" level=info msg="CreateContainer within sandbox \"383cb4c27e0ebc11014cb391130809cb726b3deb45323c748174b43f1a8dcc3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:19:21.368658 containerd[1509]: time="2025-02-13T19:19:21.368592320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"729c0ad134c6a69ae197378c07626a50bfb0a35a95225c2d0fd7841a9447593c\"" Feb 13 19:19:21.369383 kubelet[2259]: E0213 19:19:21.369356 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:21.369845 containerd[1509]: time="2025-02-13T19:19:21.369821456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e4aaaa634c737a14b2eb5c33d2f9ab629b28d057e268b21aa21640321388404\"" Feb 13 19:19:21.370868 kubelet[2259]: E0213 19:19:21.370263 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:21.371251 containerd[1509]: 
time="2025-02-13T19:19:21.371228124Z" level=info msg="CreateContainer within sandbox \"729c0ad134c6a69ae197378c07626a50bfb0a35a95225c2d0fd7841a9447593c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:19:21.385034 containerd[1509]: time="2025-02-13T19:19:21.384996584Z" level=info msg="CreateContainer within sandbox \"3e4aaaa634c737a14b2eb5c33d2f9ab629b28d057e268b21aa21640321388404\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:19:21.388476 containerd[1509]: time="2025-02-13T19:19:21.388294770Z" level=info msg="CreateContainer within sandbox \"383cb4c27e0ebc11014cb391130809cb726b3deb45323c748174b43f1a8dcc3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d6745116885ceca49168b617e02c557ba4a5f58b3056b8a55524d0f5c1a81d6\"" Feb 13 19:19:21.388749 containerd[1509]: time="2025-02-13T19:19:21.388716430Z" level=info msg="StartContainer for \"4d6745116885ceca49168b617e02c557ba4a5f58b3056b8a55524d0f5c1a81d6\"" Feb 13 19:19:21.397429 containerd[1509]: time="2025-02-13T19:19:21.397405405Z" level=info msg="CreateContainer within sandbox \"729c0ad134c6a69ae197378c07626a50bfb0a35a95225c2d0fd7841a9447593c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa1a25ba342218e5aabc2c85e84c1eb36861189b82811f77727f41b479216244\"" Feb 13 19:19:21.397886 containerd[1509]: time="2025-02-13T19:19:21.397858405Z" level=info msg="StartContainer for \"aa1a25ba342218e5aabc2c85e84c1eb36861189b82811f77727f41b479216244\"" Feb 13 19:19:21.408218 kubelet[2259]: W0213 19:19:21.407406 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Feb 13 19:19:21.408218 kubelet[2259]: E0213 19:19:21.407469 2259 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:21.411677 containerd[1509]: time="2025-02-13T19:19:21.411516358Z" level=info msg="CreateContainer within sandbox \"3e4aaaa634c737a14b2eb5c33d2f9ab629b28d057e268b21aa21640321388404\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b69f818abf30d31b75cf9805a9e3f799959b3fa88301a973afaab917443d9b40\"" Feb 13 19:19:21.412720 containerd[1509]: time="2025-02-13T19:19:21.411990738Z" level=info msg="StartContainer for \"b69f818abf30d31b75cf9805a9e3f799959b3fa88301a973afaab917443d9b40\"" Feb 13 19:19:21.416288 systemd[1]: Started cri-containerd-4d6745116885ceca49168b617e02c557ba4a5f58b3056b8a55524d0f5c1a81d6.scope - libcontainer container 4d6745116885ceca49168b617e02c557ba4a5f58b3056b8a55524d0f5c1a81d6. Feb 13 19:19:21.421071 systemd[1]: Started cri-containerd-aa1a25ba342218e5aabc2c85e84c1eb36861189b82811f77727f41b479216244.scope - libcontainer container aa1a25ba342218e5aabc2c85e84c1eb36861189b82811f77727f41b479216244. Feb 13 19:19:21.444060 systemd[1]: Started cri-containerd-b69f818abf30d31b75cf9805a9e3f799959b3fa88301a973afaab917443d9b40.scope - libcontainer container b69f818abf30d31b75cf9805a9e3f799959b3fa88301a973afaab917443d9b40. 
Feb 13 19:19:21.463074 containerd[1509]: time="2025-02-13T19:19:21.463025316Z" level=info msg="StartContainer for \"4d6745116885ceca49168b617e02c557ba4a5f58b3056b8a55524d0f5c1a81d6\" returns successfully" Feb 13 19:19:21.474967 containerd[1509]: time="2025-02-13T19:19:21.474179644Z" level=info msg="StartContainer for \"aa1a25ba342218e5aabc2c85e84c1eb36861189b82811f77727f41b479216244\" returns successfully" Feb 13 19:19:21.489042 containerd[1509]: time="2025-02-13T19:19:21.488988176Z" level=info msg="StartContainer for \"b69f818abf30d31b75cf9805a9e3f799959b3fa88301a973afaab917443d9b40\" returns successfully" Feb 13 19:19:21.794631 kubelet[2259]: I0213 19:19:21.794594 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:19:22.248662 kubelet[2259]: E0213 19:19:22.248277 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:22.248662 kubelet[2259]: E0213 19:19:22.248417 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:22.257465 kubelet[2259]: E0213 19:19:22.257427 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:22.257899 kubelet[2259]: E0213 19:19:22.257871 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:22.260498 kubelet[2259]: E0213 19:19:22.260471 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:22.260638 kubelet[2259]: E0213 19:19:22.260616 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:22.530740 kubelet[2259]: E0213 19:19:22.530582 2259 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:19:22.631732 kubelet[2259]: I0213 19:19:22.631686 2259 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:19:22.631732 kubelet[2259]: E0213 19:19:22.631730 2259 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:19:22.635959 kubelet[2259]: E0213 19:19:22.635900 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:22.678669 kubelet[2259]: E0213 19:19:22.678551 2259 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dab6e8e62093 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:19:20.221442195 +0000 UTC m=+0.594924137,LastTimestamp:2025-02-13 19:19:20.221442195 +0000 UTC m=+0.594924137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:19:22.732043 kubelet[2259]: E0213 19:19:22.731917 2259 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dab6e93896e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:19:20.226846439 +0000 UTC m=+0.600328381,LastTimestamp:2025-02-13 19:19:20.226846439 +0000 UTC m=+0.600328381,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:19:22.736018 kubelet[2259]: E0213 19:19:22.735969 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:22.784231 kubelet[2259]: E0213 19:19:22.784073 2259 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dab6ea120918 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:19:20.241096984 +0000 UTC m=+0.614578926,LastTimestamp:2025-02-13 19:19:20.241096984 +0000 UTC m=+0.614578926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:19:22.836524 kubelet[2259]: E0213 19:19:22.836498 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:22.936813 kubelet[2259]: E0213 19:19:22.936759 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.037906 kubelet[2259]: E0213 19:19:23.037790 2259 kubelet_node_status.go:467] "Error 
getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.138556 kubelet[2259]: E0213 19:19:23.138505 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.238872 kubelet[2259]: E0213 19:19:23.238829 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.261970 kubelet[2259]: E0213 19:19:23.261926 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:23.262059 kubelet[2259]: E0213 19:19:23.262039 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:19:23.262059 kubelet[2259]: E0213 19:19:23.262050 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:23.262188 kubelet[2259]: E0213 19:19:23.262173 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:23.339924 kubelet[2259]: E0213 19:19:23.339804 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.440543 kubelet[2259]: E0213 19:19:23.440513 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.541226 kubelet[2259]: E0213 19:19:23.541173 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.642107 kubelet[2259]: E0213 19:19:23.641984 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"localhost\" not found" Feb 13 19:19:23.742979 kubelet[2259]: E0213 19:19:23.742917 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.843580 kubelet[2259]: E0213 19:19:23.843529 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:23.944591 kubelet[2259]: E0213 19:19:23.944534 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:24.045256 kubelet[2259]: E0213 19:19:24.045221 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:24.146042 kubelet[2259]: E0213 19:19:24.145994 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:24.247197 kubelet[2259]: E0213 19:19:24.247073 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:19:24.326370 kubelet[2259]: I0213 19:19:24.326337 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:19:24.333308 kubelet[2259]: I0213 19:19:24.333255 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:19:24.337619 kubelet[2259]: I0213 19:19:24.337595 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:19:24.552925 kubelet[2259]: I0213 19:19:24.552823 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:19:24.559066 systemd[1]: Reload requested from client PID 2536 ('systemctl') (unit session-7.scope)... Feb 13 19:19:24.559081 systemd[1]: Reloading... 
Feb 13 19:19:24.562445 kubelet[2259]: E0213 19:19:24.562393 2259 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:19:24.562554 kubelet[2259]: E0213 19:19:24.562538 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:24.631968 zram_generator::config[2580]: No configuration found. Feb 13 19:19:24.809704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:19:24.938869 systemd[1]: Reloading finished in 379 ms. Feb 13 19:19:24.967136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:24.989449 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:19:24.989769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:24.989834 systemd[1]: kubelet.service: Consumed 961ms CPU time, 129.4M memory peak. Feb 13 19:19:24.999243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:25.173645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:25.178420 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:19:25.218246 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:25.218246 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Feb 13 19:19:25.218246 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:25.218666 kubelet[2625]: I0213 19:19:25.218317 2625 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:19:25.224457 kubelet[2625]: I0213 19:19:25.224424 2625 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:19:25.224457 kubelet[2625]: I0213 19:19:25.224443 2625 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:19:25.224666 kubelet[2625]: I0213 19:19:25.224647 2625 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:19:25.225689 kubelet[2625]: I0213 19:19:25.225665 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:19:25.228809 kubelet[2625]: I0213 19:19:25.227960 2625 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:19:25.231195 kubelet[2625]: E0213 19:19:25.231167 2625 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:19:25.231195 kubelet[2625]: I0213 19:19:25.231192 2625 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:19:25.236257 kubelet[2625]: I0213 19:19:25.236232 2625 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:19:25.236493 kubelet[2625]: I0213 19:19:25.236454 2625 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:19:25.236653 kubelet[2625]: I0213 19:19:25.236481 2625 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:19:25.236653 kubelet[2625]: I0213 19:19:25.236644 2625 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 19:19:25.236653 kubelet[2625]: I0213 19:19:25.236652 2625 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 19:19:25.236797 kubelet[2625]: I0213 19:19:25.236690 2625 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:19:25.236863 kubelet[2625]: I0213 19:19:25.236833 2625 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 19:19:25.236893 kubelet[2625]: I0213 19:19:25.236870 2625 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:19:25.236893 kubelet[2625]: I0213 19:19:25.236892 2625 kubelet.go:352] "Adding apiserver pod source"
Feb 13 19:19:25.237124 kubelet[2625]: I0213 19:19:25.236902 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:19:25.237639 kubelet[2625]: I0213 19:19:25.237614 2625 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:19:25.238000 kubelet[2625]: I0213 19:19:25.237977 2625 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:19:25.238396 kubelet[2625]: I0213 19:19:25.238374 2625 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 19:19:25.238431 kubelet[2625]: I0213 19:19:25.238403 2625 server.go:1287] "Started kubelet"
Feb 13 19:19:25.238951 kubelet[2625]: I0213 19:19:25.238510 2625 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:19:25.239969 kubelet[2625]: I0213 19:19:25.239844 2625 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 19:19:25.247101 kubelet[2625]: I0213 19:19:25.238678 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:19:25.248008 kubelet[2625]: I0213 19:19:25.240582 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:19:25.248008 kubelet[2625]: I0213 19:19:25.240598 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:19:25.248008 kubelet[2625]: I0213 19:19:25.247747 2625 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 19:19:25.248008 kubelet[2625]: E0213 19:19:25.247848 2625 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:19:25.248813 kubelet[2625]: I0213 19:19:25.248799 2625 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:19:25.249138 kubelet[2625]: I0213 19:19:25.249122 2625 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:19:25.249770 kubelet[2625]: I0213 19:19:25.249755 2625 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:19:25.251858 kubelet[2625]: E0213 19:19:25.251823 2625 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:19:25.251858 kubelet[2625]: I0213 19:19:25.252010 2625 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:19:25.251858 kubelet[2625]: I0213 19:19:25.252081 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:19:25.251858 kubelet[2625]: I0213 19:19:25.253014 2625 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:19:25.260022 kubelet[2625]: I0213 19:19:25.259920 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:19:25.261271 kubelet[2625]: I0213 19:19:25.261228 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:19:25.261271 kubelet[2625]: I0213 19:19:25.261261 2625 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 19:19:25.261337 kubelet[2625]: I0213 19:19:25.261283 2625 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:19:25.261337 kubelet[2625]: I0213 19:19:25.261292 2625 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 19:19:25.261383 kubelet[2625]: E0213 19:19:25.261337 2625 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:19:25.288033 kubelet[2625]: I0213 19:19:25.288006 2625 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 19:19:25.288033 kubelet[2625]: I0213 19:19:25.288024 2625 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 19:19:25.288033 kubelet[2625]: I0213 19:19:25.288042 2625 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:19:25.288211 kubelet[2625]: I0213 19:19:25.288194 2625 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:19:25.288235 kubelet[2625]: I0213 19:19:25.288204 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:19:25.288235 kubelet[2625]: I0213 19:19:25.288220 2625 policy_none.go:49] "None policy: Start"
Feb 13 19:19:25.288235 kubelet[2625]: I0213 19:19:25.288228 2625 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 19:19:25.288340 kubelet[2625]: I0213 19:19:25.288238 2625 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:19:25.288340 kubelet[2625]: I0213 19:19:25.288338 2625 state_mem.go:75] "Updated machine memory state"
Feb 13 19:19:25.293112 kubelet[2625]: I0213 19:19:25.292690 2625 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:19:25.293112 kubelet[2625]: I0213 19:19:25.292862 2625 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:19:25.293112 kubelet[2625]: I0213 19:19:25.292872 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:19:25.293112 kubelet[2625]: I0213 19:19:25.293055 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:19:25.294042 kubelet[2625]: E0213 19:19:25.294016 2625 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:19:25.362643 kubelet[2625]: I0213 19:19:25.362596 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:19:25.362783 kubelet[2625]: I0213 19:19:25.362744 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.362783 kubelet[2625]: I0213 19:19:25.362762 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:25.368234 kubelet[2625]: E0213 19:19:25.368184 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:19:25.368311 kubelet[2625]: E0213 19:19:25.368265 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:25.368311 kubelet[2625]: E0213 19:19:25.368275 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.398653 kubelet[2625]: I0213 19:19:25.398622 2625 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 19:19:25.404971 kubelet[2625]: I0213 19:19:25.404928 2625 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Feb 13 19:19:25.405073 kubelet[2625]: I0213 19:19:25.405029 2625 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 19:19:25.450668 kubelet[2625]: I0213 19:19:25.450630 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.450668 kubelet[2625]: I0213 19:19:25.450666 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.450838 kubelet[2625]: I0213 19:19:25.450685 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.450838 kubelet[2625]: I0213 19:19:25.450700 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.450838 kubelet[2625]: I0213 19:19:25.450728 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1f5da22dfe533a022166d9acb35e974-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1f5da22dfe533a022166d9acb35e974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:25.450838 kubelet[2625]: I0213 19:19:25.450758 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1f5da22dfe533a022166d9acb35e974-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1f5da22dfe533a022166d9acb35e974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:25.450838 kubelet[2625]: I0213 19:19:25.450773 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1f5da22dfe533a022166d9acb35e974-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1f5da22dfe533a022166d9acb35e974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:25.450988 kubelet[2625]: I0213 19:19:25.450824 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:19:25.450988 kubelet[2625]: I0213 19:19:25.450840 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:19:25.669196 kubelet[2625]: E0213 19:19:25.669116 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:25.669331 kubelet[2625]: E0213 19:19:25.669197 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:25.669331 kubelet[2625]: E0213 19:19:25.669316 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:26.237751 kubelet[2625]: I0213 19:19:26.237670 2625 apiserver.go:52] "Watching apiserver"
Feb 13 19:19:26.249287 kubelet[2625]: I0213 19:19:26.249251 2625 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:19:26.273379 kubelet[2625]: I0213 19:19:26.273342 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:26.273664 kubelet[2625]: I0213 19:19:26.273642 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:19:26.275160 kubelet[2625]: E0213 19:19:26.275137 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:26.288970 kubelet[2625]: E0213 19:19:26.288908 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 19:19:26.289164 kubelet[2625]: E0213 19:19:26.289139 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:26.295152 kubelet[2625]: E0213 19:19:26.295111 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 19:19:26.295359 kubelet[2625]: E0213 19:19:26.295333 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:26.332321 kubelet[2625]: I0213 19:19:26.332250 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.33223358 podStartE2EDuration="2.33223358s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:26.322858934 +0000 UTC m=+1.140654943" watchObservedRunningTime="2025-02-13 19:19:26.33223358 +0000 UTC m=+1.150029589"
Feb 13 19:19:26.348913 kubelet[2625]: I0213 19:19:26.348853 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.348834537 podStartE2EDuration="2.348834537s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:26.338945808 +0000 UTC m=+1.156741817" watchObservedRunningTime="2025-02-13 19:19:26.348834537 +0000 UTC m=+1.166630546"
Feb 13 19:19:27.274545 kubelet[2625]: E0213 19:19:27.274496 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:27.275051 kubelet[2625]: E0213 19:19:27.275037 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:27.618134 kubelet[2625]: E0213 19:19:27.618027 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:29.841566 sudo[1702]: pam_unix(sudo:session): session closed for user root
Feb 13 19:19:29.842998 sshd[1701]: Connection closed by 10.0.0.1 port 36636
Feb 13 19:19:29.843422 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Feb 13 19:19:29.848327 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:36636.service: Deactivated successfully.
Feb 13 19:19:29.850683 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:19:29.850906 systemd[1]: session-7.scope: Consumed 4.119s CPU time, 213.4M memory peak.
Feb 13 19:19:29.852409 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:19:29.853427 systemd-logind[1499]: Removed session 7.
Feb 13 19:19:31.666698 kubelet[2625]: I0213 19:19:31.666668 2625 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:19:31.667179 containerd[1509]: time="2025-02-13T19:19:31.667005933Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:19:31.667455 kubelet[2625]: I0213 19:19:31.667162 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:19:31.937324 kubelet[2625]: E0213 19:19:31.935721 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:31.948042 kubelet[2625]: I0213 19:19:31.947826 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.947809319 podStartE2EDuration="7.947809319s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:26.349210696 +0000 UTC m=+1.167006705" watchObservedRunningTime="2025-02-13 19:19:31.947809319 +0000 UTC m=+6.765605348"
Feb 13 19:19:32.281418 kubelet[2625]: E0213 19:19:32.281391 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:32.377502 systemd[1]: Created slice kubepods-besteffort-pod4e38f292_8ace_4c9f_9931_b896e505c76d.slice - libcontainer container kubepods-besteffort-pod4e38f292_8ace_4c9f_9931_b896e505c76d.slice.
Feb 13 19:19:32.395993 kubelet[2625]: I0213 19:19:32.395954 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e38f292-8ace-4c9f-9931-b896e505c76d-kube-proxy\") pod \"kube-proxy-wfnzl\" (UID: \"4e38f292-8ace-4c9f-9931-b896e505c76d\") " pod="kube-system/kube-proxy-wfnzl"
Feb 13 19:19:32.396161 kubelet[2625]: I0213 19:19:32.396146 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e38f292-8ace-4c9f-9931-b896e505c76d-lib-modules\") pod \"kube-proxy-wfnzl\" (UID: \"4e38f292-8ace-4c9f-9931-b896e505c76d\") " pod="kube-system/kube-proxy-wfnzl"
Feb 13 19:19:32.396216 kubelet[2625]: I0213 19:19:32.396165 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6kn\" (UniqueName: \"kubernetes.io/projected/4e38f292-8ace-4c9f-9931-b896e505c76d-kube-api-access-bv6kn\") pod \"kube-proxy-wfnzl\" (UID: \"4e38f292-8ace-4c9f-9931-b896e505c76d\") " pod="kube-system/kube-proxy-wfnzl"
Feb 13 19:19:32.396216 kubelet[2625]: I0213 19:19:32.396187 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e38f292-8ace-4c9f-9931-b896e505c76d-xtables-lock\") pod \"kube-proxy-wfnzl\" (UID: \"4e38f292-8ace-4c9f-9931-b896e505c76d\") " pod="kube-system/kube-proxy-wfnzl"
Feb 13 19:19:32.688408 kubelet[2625]: E0213 19:19:32.688279 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:32.688882 containerd[1509]: time="2025-02-13T19:19:32.688795277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfnzl,Uid:4e38f292-8ace-4c9f-9931-b896e505c76d,Namespace:kube-system,Attempt:0,}"
Feb 13 19:19:33.066346 kubelet[2625]: E0213 19:19:33.064024 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:33.066585 containerd[1509]: time="2025-02-13T19:19:33.066259685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:19:33.066585 containerd[1509]: time="2025-02-13T19:19:33.066368982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:19:33.066585 containerd[1509]: time="2025-02-13T19:19:33.066398999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:33.066585 containerd[1509]: time="2025-02-13T19:19:33.066520821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:33.080064 systemd[1]: Created slice kubepods-besteffort-pod543b6e12_4300_48da_b600_b1352dae9eb1.slice - libcontainer container kubepods-besteffort-pod543b6e12_4300_48da_b600_b1352dae9eb1.slice.
Feb 13 19:19:33.098094 systemd[1]: Started cri-containerd-aed3868b86417a4831bbb70b91dcfd02aed43b7ecf2214db766a68c1d46959d2.scope - libcontainer container aed3868b86417a4831bbb70b91dcfd02aed43b7ecf2214db766a68c1d46959d2.
Feb 13 19:19:33.100160 kubelet[2625]: I0213 19:19:33.100125 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/543b6e12-4300-48da-b600-b1352dae9eb1-var-lib-calico\") pod \"tigera-operator-7d68577dc5-xkm74\" (UID: \"543b6e12-4300-48da-b600-b1352dae9eb1\") " pod="tigera-operator/tigera-operator-7d68577dc5-xkm74"
Feb 13 19:19:33.100521 kubelet[2625]: I0213 19:19:33.100454 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxkc\" (UniqueName: \"kubernetes.io/projected/543b6e12-4300-48da-b600-b1352dae9eb1-kube-api-access-dpxkc\") pod \"tigera-operator-7d68577dc5-xkm74\" (UID: \"543b6e12-4300-48da-b600-b1352dae9eb1\") " pod="tigera-operator/tigera-operator-7d68577dc5-xkm74"
Feb 13 19:19:33.123397 containerd[1509]: time="2025-02-13T19:19:33.123318701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfnzl,Uid:4e38f292-8ace-4c9f-9931-b896e505c76d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed3868b86417a4831bbb70b91dcfd02aed43b7ecf2214db766a68c1d46959d2\""
Feb 13 19:19:33.124235 kubelet[2625]: E0213 19:19:33.124196 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:33.126147 containerd[1509]: time="2025-02-13T19:19:33.126108572Z" level=info msg="CreateContainer within sandbox \"aed3868b86417a4831bbb70b91dcfd02aed43b7ecf2214db766a68c1d46959d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:19:33.163599 containerd[1509]: time="2025-02-13T19:19:33.163539454Z" level=info msg="CreateContainer within sandbox \"aed3868b86417a4831bbb70b91dcfd02aed43b7ecf2214db766a68c1d46959d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b888ce9e9d9ece503e65f64290745385ee5f4a081bf4d509101d00ff213441f\""
Feb 13 19:19:33.164235 containerd[1509]: time="2025-02-13T19:19:33.164170934Z" level=info msg="StartContainer for \"2b888ce9e9d9ece503e65f64290745385ee5f4a081bf4d509101d00ff213441f\""
Feb 13 19:19:33.195084 systemd[1]: Started cri-containerd-2b888ce9e9d9ece503e65f64290745385ee5f4a081bf4d509101d00ff213441f.scope - libcontainer container 2b888ce9e9d9ece503e65f64290745385ee5f4a081bf4d509101d00ff213441f.
Feb 13 19:19:33.270188 containerd[1509]: time="2025-02-13T19:19:33.270139162Z" level=info msg="StartContainer for \"2b888ce9e9d9ece503e65f64290745385ee5f4a081bf4d509101d00ff213441f\" returns successfully"
Feb 13 19:19:33.284160 kubelet[2625]: E0213 19:19:33.284054 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:33.287031 kubelet[2625]: E0213 19:19:33.284927 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:33.287031 kubelet[2625]: E0213 19:19:33.285029 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:33.384177 containerd[1509]: time="2025-02-13T19:19:33.384081411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-xkm74,Uid:543b6e12-4300-48da-b600-b1352dae9eb1,Namespace:tigera-operator,Attempt:0,}"
Feb 13 19:19:33.410537 containerd[1509]: time="2025-02-13T19:19:33.410426039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:19:33.410537 containerd[1509]: time="2025-02-13T19:19:33.410509016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:19:33.410537 containerd[1509]: time="2025-02-13T19:19:33.410525187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:33.410781 containerd[1509]: time="2025-02-13T19:19:33.410661426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:33.441054 systemd[1]: Started cri-containerd-d62188532b95484ef71615cdd7abdf063e5cf87ab429c0f61839ad0fcc93a958.scope - libcontainer container d62188532b95484ef71615cdd7abdf063e5cf87ab429c0f61839ad0fcc93a958.
Feb 13 19:19:33.476698 containerd[1509]: time="2025-02-13T19:19:33.476475748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-xkm74,Uid:543b6e12-4300-48da-b600-b1352dae9eb1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d62188532b95484ef71615cdd7abdf063e5cf87ab429c0f61839ad0fcc93a958\""
Feb 13 19:19:33.478287 containerd[1509]: time="2025-02-13T19:19:33.478250100Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 19:19:34.895834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486054215.mount: Deactivated successfully.
Feb 13 19:19:35.076424 update_engine[1501]: I20250213 19:19:35.076329 1501 update_attempter.cc:509] Updating boot flags...
Feb 13 19:19:35.100965 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2970)
Feb 13 19:19:35.154013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2970)
Feb 13 19:19:35.500418 containerd[1509]: time="2025-02-13T19:19:35.500362297Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:35.501345 containerd[1509]: time="2025-02-13T19:19:35.501304034Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Feb 13 19:19:35.502712 containerd[1509]: time="2025-02-13T19:19:35.502683501Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:35.504843 containerd[1509]: time="2025-02-13T19:19:35.504812740Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:35.505496 containerd[1509]: time="2025-02-13T19:19:35.505454617Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.027171215s"
Feb 13 19:19:35.505496 containerd[1509]: time="2025-02-13T19:19:35.505493551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Feb 13 19:19:35.507511 containerd[1509]: time="2025-02-13T19:19:35.507488045Z" level=info msg="CreateContainer within sandbox \"d62188532b95484ef71615cdd7abdf063e5cf87ab429c0f61839ad0fcc93a958\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 19:19:35.518278 containerd[1509]: time="2025-02-13T19:19:35.518244480Z" level=info msg="CreateContainer within sandbox \"d62188532b95484ef71615cdd7abdf063e5cf87ab429c0f61839ad0fcc93a958\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec099eab52d5a91b7938582bf543fe3508532d3cb3d9554dd5d81398ebae93ca\""
Feb 13 19:19:35.518708 containerd[1509]: time="2025-02-13T19:19:35.518624031Z" level=info msg="StartContainer for \"ec099eab52d5a91b7938582bf543fe3508532d3cb3d9554dd5d81398ebae93ca\""
Feb 13 19:19:35.544103 systemd[1]: Started cri-containerd-ec099eab52d5a91b7938582bf543fe3508532d3cb3d9554dd5d81398ebae93ca.scope - libcontainer container ec099eab52d5a91b7938582bf543fe3508532d3cb3d9554dd5d81398ebae93ca.
Feb 13 19:19:35.572947 containerd[1509]: time="2025-02-13T19:19:35.572884728Z" level=info msg="StartContainer for \"ec099eab52d5a91b7938582bf543fe3508532d3cb3d9554dd5d81398ebae93ca\" returns successfully"
Feb 13 19:19:36.341195 kubelet[2625]: I0213 19:19:36.341093 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wfnzl" podStartSLOduration=4.341075512 podStartE2EDuration="4.341075512s" podCreationTimestamp="2025-02-13 19:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:33.30229955 +0000 UTC m=+8.120095569" watchObservedRunningTime="2025-02-13 19:19:36.341075512 +0000 UTC m=+11.158871521"
Feb 13 19:19:36.341726 kubelet[2625]: I0213 19:19:36.341214 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-xkm74" podStartSLOduration=1.312647422 podStartE2EDuration="3.341208112s" podCreationTimestamp="2025-02-13 19:19:33 +0000 UTC" firstStartedPulling="2025-02-13 19:19:33.477914543 +0000 UTC m=+8.295710552" lastFinishedPulling="2025-02-13 19:19:35.506475233 +0000 UTC m=+10.324271242" observedRunningTime="2025-02-13 19:19:36.340747439 +0000 UTC m=+11.158543448" watchObservedRunningTime="2025-02-13 19:19:36.341208112 +0000 UTC m=+11.159004131"
Feb 13 19:19:37.622307 kubelet[2625]: E0213 19:19:37.622275 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:38.605859 systemd[1]: Created slice kubepods-besteffort-pod9599ddf1_711d_4612_a122_45f5678605c5.slice - libcontainer container kubepods-besteffort-pod9599ddf1_711d_4612_a122_45f5678605c5.slice.
Feb 13 19:19:38.632582 systemd[1]: Created slice kubepods-besteffort-pod92d010c8_dd38_4b76_b458_a497efb2ac6f.slice - libcontainer container kubepods-besteffort-pod92d010c8_dd38_4b76_b458_a497efb2ac6f.slice.
Feb 13 19:19:38.649173 kubelet[2625]: I0213 19:19:38.649036 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-flexvol-driver-host\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649173 kubelet[2625]: I0213 19:19:38.649081 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/92d010c8-dd38-4b76-b458-a497efb2ac6f-node-certs\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649173 kubelet[2625]: I0213 19:19:38.649099 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-cni-bin-dir\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649173 kubelet[2625]: I0213 19:19:38.649114 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-xtables-lock\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649173 kubelet[2625]: I0213 19:19:38.649142 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-policysync\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649716 kubelet[2625]: I0213 19:19:38.649175 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lqt9\" (UniqueName: \"kubernetes.io/projected/9599ddf1-711d-4612-a122-45f5678605c5-kube-api-access-4lqt9\") pod \"calico-typha-67dc89864-6pl98\" (UID: \"9599ddf1-711d-4612-a122-45f5678605c5\") " pod="calico-system/calico-typha-67dc89864-6pl98"
Feb 13 19:19:38.649716 kubelet[2625]: I0213 19:19:38.649218 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9599ddf1-711d-4612-a122-45f5678605c5-tigera-ca-bundle\") pod \"calico-typha-67dc89864-6pl98\" (UID: \"9599ddf1-711d-4612-a122-45f5678605c5\") " pod="calico-system/calico-typha-67dc89864-6pl98"
Feb 13 19:19:38.649716 kubelet[2625]: I0213 19:19:38.649234 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9599ddf1-711d-4612-a122-45f5678605c5-typha-certs\") pod \"calico-typha-67dc89864-6pl98\" (UID: \"9599ddf1-711d-4612-a122-45f5678605c5\") " pod="calico-system/calico-typha-67dc89864-6pl98"
Feb 13 19:19:38.649716 kubelet[2625]: I0213 19:19:38.649254 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-cni-net-dir\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649716 kubelet[2625]: I0213 19:19:38.649295 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92d010c8-dd38-4b76-b458-a497efb2ac6f-tigera-ca-bundle\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649876 kubelet[2625]: I0213 19:19:38.649313 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-var-run-calico\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649876 kubelet[2625]: I0213 19:19:38.649332 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-lib-modules\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649876 kubelet[2625]: I0213 19:19:38.649397 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-var-lib-calico\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649876 kubelet[2625]: I0213 19:19:38.649449 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/92d010c8-dd38-4b76-b458-a497efb2ac6f-cni-log-dir\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.649876 kubelet[2625]: I0213 19:19:38.649477 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7shp\" (UniqueName: \"kubernetes.io/projected/92d010c8-dd38-4b76-b458-a497efb2ac6f-kube-api-access-f7shp\") pod \"calico-node-llbc5\" (UID: \"92d010c8-dd38-4b76-b458-a497efb2ac6f\") " pod="calico-system/calico-node-llbc5"
Feb 13 19:19:38.747429 kubelet[2625]: E0213 19:19:38.747378 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:38.752379 kubelet[2625]: E0213 19:19:38.752347 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.752379 kubelet[2625]: W0213 19:19:38.752372 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.752529 kubelet[2625]: E0213 19:19:38.752389 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.752851 kubelet[2625]: E0213 19:19:38.752691 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.752851 kubelet[2625]: W0213 19:19:38.752704 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.752851 kubelet[2625]: E0213 19:19:38.752717 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.752971 kubelet[2625]: E0213 19:19:38.752958 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.752971 kubelet[2625]: W0213 19:19:38.752967 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.753020 kubelet[2625]: E0213 19:19:38.752980 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.753530 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.755485 kubelet[2625]: W0213 19:19:38.753544 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.753558 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.753895 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.755485 kubelet[2625]: W0213 19:19:38.753903 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.753921 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.755070 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.755485 kubelet[2625]: W0213 19:19:38.755080 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.755099 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.755485 kubelet[2625]: E0213 19:19:38.755304 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.755729 kubelet[2625]: W0213 19:19:38.755312 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.755729 kubelet[2625]: E0213 19:19:38.755393 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.755964 kubelet[2625]: E0213 19:19:38.755896 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.755964 kubelet[2625]: W0213 19:19:38.755909 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.756061 kubelet[2625]: E0213 19:19:38.755965 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.756141 kubelet[2625]: E0213 19:19:38.756119 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.756141 kubelet[2625]: W0213 19:19:38.756130 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.756194 kubelet[2625]: E0213 19:19:38.756164 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.756334 kubelet[2625]: E0213 19:19:38.756305 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.756334 kubelet[2625]: W0213 19:19:38.756325 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.756443 kubelet[2625]: E0213 19:19:38.756411 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.756521 kubelet[2625]: E0213 19:19:38.756501 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.756521 kubelet[2625]: W0213 19:19:38.756509 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.756591 kubelet[2625]: E0213 19:19:38.756583 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.757595 kubelet[2625]: E0213 19:19:38.757577 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.757595 kubelet[2625]: W0213 19:19:38.757592 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.757812 kubelet[2625]: E0213 19:19:38.757697 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.757812 kubelet[2625]: E0213 19:19:38.757801 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.757812 kubelet[2625]: W0213 19:19:38.757810 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.758064 kubelet[2625]: E0213 19:19:38.757893 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.758064 kubelet[2625]: E0213 19:19:38.758044 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.758064 kubelet[2625]: W0213 19:19:38.758060 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.761373 kubelet[2625]: E0213 19:19:38.761274 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.764358 kubelet[2625]: E0213 19:19:38.763566 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.764358 kubelet[2625]: W0213 19:19:38.763584 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.764358 kubelet[2625]: E0213 19:19:38.764339 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.765039 kubelet[2625]: E0213 19:19:38.763892 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.765089 kubelet[2625]: W0213 19:19:38.765043 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.765121 kubelet[2625]: E0213 19:19:38.765109 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.767612 kubelet[2625]: E0213 19:19:38.767596 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.767658 kubelet[2625]: W0213 19:19:38.767611 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.767790 kubelet[2625]: E0213 19:19:38.767770 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.767946 kubelet[2625]: E0213 19:19:38.767908 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.768085 kubelet[2625]: W0213 19:19:38.767946 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.768125 kubelet[2625]: E0213 19:19:38.768050 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.768722 kubelet[2625]: E0213 19:19:38.768634 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.768722 kubelet[2625]: W0213 19:19:38.768651 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.768722 kubelet[2625]: E0213 19:19:38.768709 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.774309 kubelet[2625]: E0213 19:19:38.774241 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.774309 kubelet[2625]: W0213 19:19:38.774264 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.774415 kubelet[2625]: E0213 19:19:38.774327 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.774583 kubelet[2625]: E0213 19:19:38.774559 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.774583 kubelet[2625]: W0213 19:19:38.774577 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.774673 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.775776 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.777136 kubelet[2625]: W0213 19:19:38.775791 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.775874 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.776087 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.777136 kubelet[2625]: W0213 19:19:38.776098 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.776203 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.776608 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.777136 kubelet[2625]: W0213 19:19:38.776619 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.777136 kubelet[2625]: E0213 19:19:38.776761 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.777374 kubelet[2625]: E0213 19:19:38.776976 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.777374 kubelet[2625]: W0213 19:19:38.776989 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.777374 kubelet[2625]: E0213 19:19:38.777140 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.778342 kubelet[2625]: E0213 19:19:38.778323 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.778405 kubelet[2625]: W0213 19:19:38.778343 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.778540 kubelet[2625]: E0213 19:19:38.778476 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.778596 kubelet[2625]: E0213 19:19:38.778579 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.778625 kubelet[2625]: W0213 19:19:38.778595 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.778760 kubelet[2625]: E0213 19:19:38.778692 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.778807 kubelet[2625]: E0213 19:19:38.778800 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.778944 kubelet[2625]: W0213 19:19:38.778810 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.779597 kubelet[2625]: E0213 19:19:38.778969 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.779597 kubelet[2625]: E0213 19:19:38.779145 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.779597 kubelet[2625]: W0213 19:19:38.779154 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.779597 kubelet[2625]: E0213 19:19:38.779272 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.782023 kubelet[2625]: E0213 19:19:38.782000 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.782023 kubelet[2625]: W0213 19:19:38.782020 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.782237 kubelet[2625]: E0213 19:19:38.782166 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.782326 kubelet[2625]: E0213 19:19:38.782308 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.782362 kubelet[2625]: W0213 19:19:38.782326 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.782362 kubelet[2625]: E0213 19:19:38.782337 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.787614 kubelet[2625]: E0213 19:19:38.786078 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.787614 kubelet[2625]: W0213 19:19:38.786095 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.787614 kubelet[2625]: E0213 19:19:38.786109 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.828327 kubelet[2625]: E0213 19:19:38.828297 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.828327 kubelet[2625]: W0213 19:19:38.828315 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.828327 kubelet[2625]: E0213 19:19:38.828333 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.828751 kubelet[2625]: E0213 19:19:38.828647 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.828751 kubelet[2625]: W0213 19:19:38.828667 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.828751 kubelet[2625]: E0213 19:19:38.828690 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.829028 kubelet[2625]: E0213 19:19:38.828995 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.829028 kubelet[2625]: W0213 19:19:38.829012 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.829028 kubelet[2625]: E0213 19:19:38.829022 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.829310 kubelet[2625]: E0213 19:19:38.829285 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.829310 kubelet[2625]: W0213 19:19:38.829304 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.829363 kubelet[2625]: E0213 19:19:38.829314 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.829564 kubelet[2625]: E0213 19:19:38.829549 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.829564 kubelet[2625]: W0213 19:19:38.829561 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.829629 kubelet[2625]: E0213 19:19:38.829571 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.829812 kubelet[2625]: E0213 19:19:38.829787 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.829812 kubelet[2625]: W0213 19:19:38.829798 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.829812 kubelet[2625]: E0213 19:19:38.829807 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.830025 kubelet[2625]: E0213 19:19:38.830011 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.830025 kubelet[2625]: W0213 19:19:38.830022 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.830085 kubelet[2625]: E0213 19:19:38.830030 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.830244 kubelet[2625]: E0213 19:19:38.830229 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.830244 kubelet[2625]: W0213 19:19:38.830240 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.830291 kubelet[2625]: E0213 19:19:38.830248 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.830461 kubelet[2625]: E0213 19:19:38.830447 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.830461 kubelet[2625]: W0213 19:19:38.830458 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.830508 kubelet[2625]: E0213 19:19:38.830466 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.830674 kubelet[2625]: E0213 19:19:38.830660 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.830674 kubelet[2625]: W0213 19:19:38.830671 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.830723 kubelet[2625]: E0213 19:19:38.830679 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.830888 kubelet[2625]: E0213 19:19:38.830874 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.830888 kubelet[2625]: W0213 19:19:38.830884 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.830950 kubelet[2625]: E0213 19:19:38.830893 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.831125 kubelet[2625]: E0213 19:19:38.831110 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.831125 kubelet[2625]: W0213 19:19:38.831122 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.831175 kubelet[2625]: E0213 19:19:38.831130 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.831351 kubelet[2625]: E0213 19:19:38.831337 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.831351 kubelet[2625]: W0213 19:19:38.831348 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.831403 kubelet[2625]: E0213 19:19:38.831357 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.831560 kubelet[2625]: E0213 19:19:38.831546 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.831560 kubelet[2625]: W0213 19:19:38.831557 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.831609 kubelet[2625]: E0213 19:19:38.831565 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.831783 kubelet[2625]: E0213 19:19:38.831767 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.831813 kubelet[2625]: W0213 19:19:38.831788 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.831813 kubelet[2625]: E0213 19:19:38.831796 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.832015 kubelet[2625]: E0213 19:19:38.832002 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.832015 kubelet[2625]: W0213 19:19:38.832012 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.832068 kubelet[2625]: E0213 19:19:38.832021 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.832243 kubelet[2625]: E0213 19:19:38.832229 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.832243 kubelet[2625]: W0213 19:19:38.832240 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.832288 kubelet[2625]: E0213 19:19:38.832248 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.832448 kubelet[2625]: E0213 19:19:38.832435 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.832448 kubelet[2625]: W0213 19:19:38.832445 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.832500 kubelet[2625]: E0213 19:19:38.832453 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.832667 kubelet[2625]: E0213 19:19:38.832653 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.832667 kubelet[2625]: W0213 19:19:38.832665 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.832717 kubelet[2625]: E0213 19:19:38.832674 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.832881 kubelet[2625]: E0213 19:19:38.832867 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.832881 kubelet[2625]: W0213 19:19:38.832877 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.832927 kubelet[2625]: E0213 19:19:38.832885 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.850195 kubelet[2625]: E0213 19:19:38.850173 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.850195 kubelet[2625]: W0213 19:19:38.850188 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.850276 kubelet[2625]: E0213 19:19:38.850202 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.850276 kubelet[2625]: I0213 19:19:38.850227 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c0c44a2-2d4f-44a3-b176-d65ebad0fd01-socket-dir\") pod \"csi-node-driver-gj6hs\" (UID: \"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01\") " pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:38.850466 kubelet[2625]: E0213 19:19:38.850446 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.850466 kubelet[2625]: W0213 19:19:38.850457 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.850522 kubelet[2625]: E0213 19:19:38.850472 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.850522 kubelet[2625]: I0213 19:19:38.850488 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c0c44a2-2d4f-44a3-b176-d65ebad0fd01-registration-dir\") pod \"csi-node-driver-gj6hs\" (UID: \"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01\") " pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:38.850728 kubelet[2625]: E0213 19:19:38.850711 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.850728 kubelet[2625]: W0213 19:19:38.850721 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.850797 kubelet[2625]: E0213 19:19:38.850742 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.850797 kubelet[2625]: I0213 19:19:38.850755 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c0c44a2-2d4f-44a3-b176-d65ebad0fd01-kubelet-dir\") pod \"csi-node-driver-gj6hs\" (UID: \"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01\") " pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:38.851050 kubelet[2625]: E0213 19:19:38.851032 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.851086 kubelet[2625]: W0213 19:19:38.851049 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.851086 kubelet[2625]: E0213 19:19:38.851067 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.851288 kubelet[2625]: E0213 19:19:38.851273 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.851288 kubelet[2625]: W0213 19:19:38.851285 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.851335 kubelet[2625]: E0213 19:19:38.851299 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.851545 kubelet[2625]: E0213 19:19:38.851529 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.851545 kubelet[2625]: W0213 19:19:38.851544 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.851596 kubelet[2625]: E0213 19:19:38.851560 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.851782 kubelet[2625]: E0213 19:19:38.851764 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.851782 kubelet[2625]: W0213 19:19:38.851777 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.851844 kubelet[2625]: E0213 19:19:38.851795 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.852065 kubelet[2625]: E0213 19:19:38.852048 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.852065 kubelet[2625]: W0213 19:19:38.852063 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.852127 kubelet[2625]: E0213 19:19:38.852080 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.852127 kubelet[2625]: I0213 19:19:38.852109 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cplv\" (UniqueName: \"kubernetes.io/projected/4c0c44a2-2d4f-44a3-b176-d65ebad0fd01-kube-api-access-8cplv\") pod \"csi-node-driver-gj6hs\" (UID: \"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01\") " pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:38.852314 kubelet[2625]: E0213 19:19:38.852299 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.852314 kubelet[2625]: W0213 19:19:38.852312 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.852369 kubelet[2625]: E0213 19:19:38.852326 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.852546 kubelet[2625]: E0213 19:19:38.852536 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.852580 kubelet[2625]: W0213 19:19:38.852548 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.852580 kubelet[2625]: E0213 19:19:38.852563 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.852773 kubelet[2625]: E0213 19:19:38.852761 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.852773 kubelet[2625]: W0213 19:19:38.852771 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.852813 kubelet[2625]: E0213 19:19:38.852784 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.852813 kubelet[2625]: I0213 19:19:38.852801 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c0c44a2-2d4f-44a3-b176-d65ebad0fd01-varrun\") pod \"csi-node-driver-gj6hs\" (UID: \"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01\") " pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:38.853075 kubelet[2625]: E0213 19:19:38.853057 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.853075 kubelet[2625]: W0213 19:19:38.853073 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.853129 kubelet[2625]: E0213 19:19:38.853091 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.853330 kubelet[2625]: E0213 19:19:38.853314 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.853330 kubelet[2625]: W0213 19:19:38.853328 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.853382 kubelet[2625]: E0213 19:19:38.853342 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.853594 kubelet[2625]: E0213 19:19:38.853576 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.853594 kubelet[2625]: W0213 19:19:38.853592 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.853646 kubelet[2625]: E0213 19:19:38.853603 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.853822 kubelet[2625]: E0213 19:19:38.853806 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.853822 kubelet[2625]: W0213 19:19:38.853819 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.853876 kubelet[2625]: E0213 19:19:38.853828 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.909028 kubelet[2625]: E0213 19:19:38.908927 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:38.909717 containerd[1509]: time="2025-02-13T19:19:38.909394173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67dc89864-6pl98,Uid:9599ddf1-711d-4612-a122-45f5678605c5,Namespace:calico-system,Attempt:0,}" Feb 13 19:19:38.936396 kubelet[2625]: E0213 19:19:38.936355 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:38.936828 containerd[1509]: time="2025-02-13T19:19:38.936776666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llbc5,Uid:92d010c8-dd38-4b76-b458-a497efb2ac6f,Namespace:calico-system,Attempt:0,}" Feb 13 19:19:38.953594 kubelet[2625]: E0213 19:19:38.953541 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.953594 kubelet[2625]: W0213 19:19:38.953565 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.953594 kubelet[2625]: E0213 19:19:38.953584 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.953911 kubelet[2625]: E0213 19:19:38.953854 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.953911 kubelet[2625]: W0213 19:19:38.953862 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.953911 kubelet[2625]: E0213 19:19:38.953874 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.954232 kubelet[2625]: E0213 19:19:38.954201 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.954232 kubelet[2625]: W0213 19:19:38.954223 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.954401 kubelet[2625]: E0213 19:19:38.954251 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.954542 kubelet[2625]: E0213 19:19:38.954525 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.954542 kubelet[2625]: W0213 19:19:38.954536 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.954606 kubelet[2625]: E0213 19:19:38.954551 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.954817 kubelet[2625]: E0213 19:19:38.954797 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.954817 kubelet[2625]: W0213 19:19:38.954811 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.954921 kubelet[2625]: E0213 19:19:38.954826 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.955088 kubelet[2625]: E0213 19:19:38.955057 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.955088 kubelet[2625]: W0213 19:19:38.955072 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.955158 kubelet[2625]: E0213 19:19:38.955100 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.955319 kubelet[2625]: E0213 19:19:38.955309 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.955370 kubelet[2625]: W0213 19:19:38.955323 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.955370 kubelet[2625]: E0213 19:19:38.955351 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.955527 kubelet[2625]: E0213 19:19:38.955510 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.955527 kubelet[2625]: W0213 19:19:38.955520 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.955628 kubelet[2625]: E0213 19:19:38.955551 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.955719 kubelet[2625]: E0213 19:19:38.955704 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.955719 kubelet[2625]: W0213 19:19:38.955716 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.955818 kubelet[2625]: E0213 19:19:38.955751 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.955948 kubelet[2625]: E0213 19:19:38.955914 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.955948 kubelet[2625]: W0213 19:19:38.955925 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.956029 kubelet[2625]: E0213 19:19:38.956002 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.956159 kubelet[2625]: E0213 19:19:38.956144 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.956159 kubelet[2625]: W0213 19:19:38.956156 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.956223 kubelet[2625]: E0213 19:19:38.956212 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.956392 kubelet[2625]: E0213 19:19:38.956368 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.956392 kubelet[2625]: W0213 19:19:38.956380 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.956450 kubelet[2625]: E0213 19:19:38.956395 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.956669 kubelet[2625]: E0213 19:19:38.956651 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.956669 kubelet[2625]: W0213 19:19:38.956663 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.956767 kubelet[2625]: E0213 19:19:38.956676 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.957418 kubelet[2625]: E0213 19:19:38.957088 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.957418 kubelet[2625]: W0213 19:19:38.957102 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.957418 kubelet[2625]: E0213 19:19:38.957116 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.957512 kubelet[2625]: E0213 19:19:38.957497 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.957512 kubelet[2625]: W0213 19:19:38.957507 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.957621 kubelet[2625]: E0213 19:19:38.957596 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.957816 kubelet[2625]: E0213 19:19:38.957792 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.957816 kubelet[2625]: W0213 19:19:38.957807 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.957881 kubelet[2625]: E0213 19:19:38.957843 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.958278 kubelet[2625]: E0213 19:19:38.958256 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.958278 kubelet[2625]: W0213 19:19:38.958269 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.958374 kubelet[2625]: E0213 19:19:38.958308 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:38.958553 kubelet[2625]: E0213 19:19:38.958534 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.958553 kubelet[2625]: W0213 19:19:38.958547 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.958686 kubelet[2625]: E0213 19:19:38.958658 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:38.958789 kubelet[2625]: E0213 19:19:38.958772 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:38.958789 kubelet[2625]: W0213 19:19:38.958785 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:38.958872 kubelet[2625]: E0213 19:19:38.958799 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 19:19:38.959015 kubelet[2625]: E0213 19:19:38.958999 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.959015 kubelet[2625]: W0213 19:19:38.959010 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.959083 kubelet[2625]: E0213 19:19:38.959025 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.959360 kubelet[2625]: E0213 19:19:38.959323 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.959360 kubelet[2625]: W0213 19:19:38.959336 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.959360 kubelet[2625]: E0213 19:19:38.959352 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.959702 kubelet[2625]: E0213 19:19:38.959530 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.959702 kubelet[2625]: W0213 19:19:38.959545 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.959702 kubelet[2625]: E0213 19:19:38.959553 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.960251 kubelet[2625]: E0213 19:19:38.960234 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.960294 kubelet[2625]: W0213 19:19:38.960261 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.960330 kubelet[2625]: E0213 19:19:38.960317 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.960671 kubelet[2625]: E0213 19:19:38.960650 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.960671 kubelet[2625]: W0213 19:19:38.960665 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.960921 kubelet[2625]: E0213 19:19:38.960677 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.961083 kubelet[2625]: E0213 19:19:38.961063 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.961083 kubelet[2625]: W0213 19:19:38.961079 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.961152 kubelet[2625]: E0213 19:19:38.961092 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:38.964297 kubelet[2625]: E0213 19:19:38.964281 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:19:38.964297 kubelet[2625]: W0213 19:19:38.964295 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:19:38.964372 kubelet[2625]: E0213 19:19:38.964308 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:19:39.170235 containerd[1509]: time="2025-02-13T19:19:39.169357584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:19:39.170235 containerd[1509]: time="2025-02-13T19:19:39.169442114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:19:39.170235 containerd[1509]: time="2025-02-13T19:19:39.169457754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:39.170235 containerd[1509]: time="2025-02-13T19:19:39.169756048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:39.171676 containerd[1509]: time="2025-02-13T19:19:39.170384978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:19:39.171676 containerd[1509]: time="2025-02-13T19:19:39.170451554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:19:39.171676 containerd[1509]: time="2025-02-13T19:19:39.170462695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:39.171676 containerd[1509]: time="2025-02-13T19:19:39.170703902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:39.189070 systemd[1]: Started cri-containerd-8a505c185a3ac4a10a9b60a9e926f164550319130aad3ed1d95c1744ae6a9ccd.scope - libcontainer container 8a505c185a3ac4a10a9b60a9e926f164550319130aad3ed1d95c1744ae6a9ccd.
Feb 13 19:19:39.192071 systemd[1]: Started cri-containerd-6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb.scope - libcontainer container 6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb.
Feb 13 19:19:39.214725 containerd[1509]: time="2025-02-13T19:19:39.214688687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llbc5,Uid:92d010c8-dd38-4b76-b458-a497efb2ac6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\""
Feb 13 19:19:39.215509 kubelet[2625]: E0213 19:19:39.215485 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:39.216322 containerd[1509]: time="2025-02-13T19:19:39.216292441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:19:39.229249 containerd[1509]: time="2025-02-13T19:19:39.229215254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67dc89864-6pl98,Uid:9599ddf1-711d-4612-a122-45f5678605c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a505c185a3ac4a10a9b60a9e926f164550319130aad3ed1d95c1744ae6a9ccd\""
Feb 13 19:19:39.229798 kubelet[2625]: E0213 19:19:39.229772 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:40.262009 kubelet[2625]: E0213 19:19:40.261962 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:41.274497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646629492.mount: Deactivated successfully.
Feb 13 19:19:41.341021 containerd[1509]: time="2025-02-13T19:19:41.340956278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:41.341837 containerd[1509]: time="2025-02-13T19:19:41.341800665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 19:19:41.343040 containerd[1509]: time="2025-02-13T19:19:41.342988130Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:41.346600 containerd[1509]: time="2025-02-13T19:19:41.346561303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:41.347084 containerd[1509]: time="2025-02-13T19:19:41.346780167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.130439875s"
Feb 13 19:19:41.347084 containerd[1509]: time="2025-02-13T19:19:41.346880286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 19:19:41.348274 containerd[1509]: time="2025-02-13T19:19:41.348246608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 19:19:41.349624 containerd[1509]: time="2025-02-13T19:19:41.349587643Z" level=info msg="CreateContainer within sandbox \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:19:41.371457 containerd[1509]: time="2025-02-13T19:19:41.371407956Z" level=info msg="CreateContainer within sandbox \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de\""
Feb 13 19:19:41.371886 containerd[1509]: time="2025-02-13T19:19:41.371859179Z" level=info msg="StartContainer for \"403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de\""
Feb 13 19:19:41.397063 systemd[1]: Started cri-containerd-403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de.scope - libcontainer container 403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de.
Feb 13 19:19:41.429653 containerd[1509]: time="2025-02-13T19:19:41.429608281Z" level=info msg="StartContainer for \"403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de\" returns successfully"
Feb 13 19:19:41.440888 systemd[1]: cri-containerd-403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de.scope: Deactivated successfully.
Feb 13 19:19:41.519164 containerd[1509]: time="2025-02-13T19:19:41.519089832Z" level=info msg="shim disconnected" id=403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de namespace=k8s.io
Feb 13 19:19:41.519164 containerd[1509]: time="2025-02-13T19:19:41.519151769Z" level=warning msg="cleaning up after shim disconnected" id=403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de namespace=k8s.io
Feb 13 19:19:41.519164 containerd[1509]: time="2025-02-13T19:19:41.519161697Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:19:42.246802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-403dea8639c85325d4548aa50c95cfd49ad5c545df13fdfcaa65415b4d20d3de-rootfs.mount: Deactivated successfully.
Feb 13 19:19:42.261878 kubelet[2625]: E0213 19:19:42.261831 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:42.301948 kubelet[2625]: E0213 19:19:42.301904 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:44.261784 kubelet[2625]: E0213 19:19:44.261732 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:44.897757 containerd[1509]: time="2025-02-13T19:19:44.897701564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:44.937878 containerd[1509]: time="2025-02-13T19:19:44.937788589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Feb 13 19:19:45.000763 containerd[1509]: time="2025-02-13T19:19:45.000710054Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:45.016752 containerd[1509]: time="2025-02-13T19:19:45.016640280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:45.017243 containerd[1509]: time="2025-02-13T19:19:45.017206289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.668930595s"
Feb 13 19:19:45.017243 containerd[1509]: time="2025-02-13T19:19:45.017235804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 19:19:45.018237 containerd[1509]: time="2025-02-13T19:19:45.018210092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:19:45.025654 containerd[1509]: time="2025-02-13T19:19:45.025616406Z" level=info msg="CreateContainer within sandbox \"8a505c185a3ac4a10a9b60a9e926f164550319130aad3ed1d95c1744ae6a9ccd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 19:19:45.216439 containerd[1509]: time="2025-02-13T19:19:45.216377305Z" level=info msg="CreateContainer within sandbox \"8a505c185a3ac4a10a9b60a9e926f164550319130aad3ed1d95c1744ae6a9ccd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"126e13451389fe951238beb2a673d6fe9937a06e269c0f4a26c8a728cd1b7e96\""
Feb 13 19:19:45.217213 containerd[1509]: time="2025-02-13T19:19:45.217095450Z" level=info msg="StartContainer for \"126e13451389fe951238beb2a673d6fe9937a06e269c0f4a26c8a728cd1b7e96\""
Feb 13 19:19:45.246193 systemd[1]: Started cri-containerd-126e13451389fe951238beb2a673d6fe9937a06e269c0f4a26c8a728cd1b7e96.scope - libcontainer container 126e13451389fe951238beb2a673d6fe9937a06e269c0f4a26c8a728cd1b7e96.
Feb 13 19:19:45.291799 containerd[1509]: time="2025-02-13T19:19:45.291744113Z" level=info msg="StartContainer for \"126e13451389fe951238beb2a673d6fe9937a06e269c0f4a26c8a728cd1b7e96\" returns successfully"
Feb 13 19:19:45.308096 kubelet[2625]: E0213 19:19:45.308063 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:45.321497 kubelet[2625]: I0213 19:19:45.321411 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67dc89864-6pl98" podStartSLOduration=1.533468418 podStartE2EDuration="7.321393004s" podCreationTimestamp="2025-02-13 19:19:38 +0000 UTC" firstStartedPulling="2025-02-13 19:19:39.230149701 +0000 UTC m=+14.047945710" lastFinishedPulling="2025-02-13 19:19:45.018074286 +0000 UTC m=+19.835870296" observedRunningTime="2025-02-13 19:19:45.320129189 +0000 UTC m=+20.137925198" watchObservedRunningTime="2025-02-13 19:19:45.321393004 +0000 UTC m=+20.139189013"
Feb 13 19:19:46.262493 kubelet[2625]: E0213 19:19:46.262431 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:46.308733 kubelet[2625]: I0213 19:19:46.308695 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:19:46.309152 kubelet[2625]: E0213 19:19:46.309026 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:48.261855 kubelet[2625]: E0213 19:19:48.261816 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:50.261682 kubelet[2625]: E0213 19:19:50.261622 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:51.832189 containerd[1509]: time="2025-02-13T19:19:51.832119375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:51.833121 containerd[1509]: time="2025-02-13T19:19:51.833078010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 19:19:51.834821 containerd[1509]: time="2025-02-13T19:19:51.834758465Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:51.838306 containerd[1509]: time="2025-02-13T19:19:51.838254538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:51.839015 containerd[1509]: time="2025-02-13T19:19:51.838967029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.820725248s"
Feb 13 19:19:51.839015 containerd[1509]: time="2025-02-13T19:19:51.839007606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 19:19:51.841167 containerd[1509]: time="2025-02-13T19:19:51.841131215Z" level=info msg="CreateContainer within sandbox \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:19:51.861499 containerd[1509]: time="2025-02-13T19:19:51.861432794Z" level=info msg="CreateContainer within sandbox \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98\""
Feb 13 19:19:51.862162 containerd[1509]: time="2025-02-13T19:19:51.862125139Z" level=info msg="StartContainer for \"75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98\""
Feb 13 19:19:51.890253 systemd[1]: run-containerd-runc-k8s.io-75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98-runc.bL9ddZ.mount: Deactivated successfully.
Feb 13 19:19:51.900086 systemd[1]: Started cri-containerd-75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98.scope - libcontainer container 75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98.
Feb 13 19:19:51.933925 containerd[1509]: time="2025-02-13T19:19:51.933883045Z" level=info msg="StartContainer for \"75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98\" returns successfully"
Feb 13 19:19:52.262146 kubelet[2625]: E0213 19:19:52.262111 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:19:52.320305 kubelet[2625]: E0213 19:19:52.320281 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:53.273020 systemd[1]: cri-containerd-75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98.scope: Deactivated successfully.
Feb 13 19:19:53.273419 systemd[1]: cri-containerd-75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98.scope: Consumed 550ms CPU time, 157.4M memory peak, 8K read from disk, 151M written to disk.
Feb 13 19:19:53.293807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98-rootfs.mount: Deactivated successfully.
Feb 13 19:19:53.322727 kubelet[2625]: E0213 19:19:53.322678 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:53.350225 kubelet[2625]: I0213 19:19:53.350188 2625 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 19:19:53.524105 systemd[1]: Created slice kubepods-burstable-podd78af842_8204_4eb8_8b0d_729f562f41c9.slice - libcontainer container kubepods-burstable-podd78af842_8204_4eb8_8b0d_729f562f41c9.slice.
Feb 13 19:19:53.567649 kubelet[2625]: I0213 19:19:53.567600 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnlg\" (UniqueName: \"kubernetes.io/projected/d78af842-8204-4eb8-8b0d-729f562f41c9-kube-api-access-4vnlg\") pod \"coredns-668d6bf9bc-szpfw\" (UID: \"d78af842-8204-4eb8-8b0d-729f562f41c9\") " pod="kube-system/coredns-668d6bf9bc-szpfw"
Feb 13 19:19:53.567649 kubelet[2625]: I0213 19:19:53.567640 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d78af842-8204-4eb8-8b0d-729f562f41c9-config-volume\") pod \"coredns-668d6bf9bc-szpfw\" (UID: \"d78af842-8204-4eb8-8b0d-729f562f41c9\") " pod="kube-system/coredns-668d6bf9bc-szpfw"
Feb 13 19:19:53.648015 systemd[1]: Created slice kubepods-besteffort-pod445077cf_6de7_4ccc_a14d_002ec401e21f.slice - libcontainer container kubepods-besteffort-pod445077cf_6de7_4ccc_a14d_002ec401e21f.slice.
Feb 13 19:19:53.653242 systemd[1]: Created slice kubepods-besteffort-pod853008d7_8935_4029_ae11_bd5e471b4687.slice - libcontainer container kubepods-besteffort-pod853008d7_8935_4029_ae11_bd5e471b4687.slice.
Feb 13 19:19:53.657419 systemd[1]: Created slice kubepods-burstable-pod2dbb4619_b530_4c82_b5dd_b3f7d0fb4c0e.slice - libcontainer container kubepods-burstable-pod2dbb4619_b530_4c82_b5dd_b3f7d0fb4c0e.slice.
Feb 13 19:19:53.661235 systemd[1]: Created slice kubepods-besteffort-pod12aa1040_68e2_4470_b0af_95b247e00e85.slice - libcontainer container kubepods-besteffort-pod12aa1040_68e2_4470_b0af_95b247e00e85.slice.
Feb 13 19:19:53.769208 kubelet[2625]: I0213 19:19:53.769163 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wrj\" (UniqueName: \"kubernetes.io/projected/2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e-kube-api-access-h5wrj\") pod \"coredns-668d6bf9bc-dqfv5\" (UID: \"2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e\") " pod="kube-system/coredns-668d6bf9bc-dqfv5"
Feb 13 19:19:53.769208 kubelet[2625]: I0213 19:19:53.769211 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/853008d7-8935-4029-ae11-bd5e471b4687-calico-apiserver-certs\") pod \"calico-apiserver-7cd559d499-qrdn4\" (UID: \"853008d7-8935-4029-ae11-bd5e471b4687\") " pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4"
Feb 13 19:19:53.769407 kubelet[2625]: I0213 19:19:53.769232 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e-config-volume\") pod \"coredns-668d6bf9bc-dqfv5\" (UID: \"2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e\") " pod="kube-system/coredns-668d6bf9bc-dqfv5"
Feb 13 19:19:53.769407 kubelet[2625]: I0213 19:19:53.769255 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/445077cf-6de7-4ccc-a14d-002ec401e21f-calico-apiserver-certs\") pod \"calico-apiserver-7cd559d499-bldmf\" (UID: \"445077cf-6de7-4ccc-a14d-002ec401e21f\") " pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf"
Feb 13 19:19:53.769407 kubelet[2625]: I0213 19:19:53.769284 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfw6\" (UniqueName: \"kubernetes.io/projected/853008d7-8935-4029-ae11-bd5e471b4687-kube-api-access-6tfw6\") pod \"calico-apiserver-7cd559d499-qrdn4\" (UID: \"853008d7-8935-4029-ae11-bd5e471b4687\") " pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4"
Feb 13 19:19:53.769407 kubelet[2625]: I0213 19:19:53.769317 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12aa1040-68e2-4470-b0af-95b247e00e85-tigera-ca-bundle\") pod \"calico-kube-controllers-5467b9d745-75rrp\" (UID: \"12aa1040-68e2-4470-b0af-95b247e00e85\") " pod="calico-system/calico-kube-controllers-5467b9d745-75rrp"
Feb 13 19:19:53.769407 kubelet[2625]: I0213 19:19:53.769340 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9bgv\" (UniqueName: \"kubernetes.io/projected/12aa1040-68e2-4470-b0af-95b247e00e85-kube-api-access-d9bgv\") pod \"calico-kube-controllers-5467b9d745-75rrp\" (UID: \"12aa1040-68e2-4470-b0af-95b247e00e85\") " pod="calico-system/calico-kube-controllers-5467b9d745-75rrp"
Feb 13 19:19:53.769555 kubelet[2625]: I0213 19:19:53.769387 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpv9\" (UniqueName: \"kubernetes.io/projected/445077cf-6de7-4ccc-a14d-002ec401e21f-kube-api-access-5dpv9\") pod \"calico-apiserver-7cd559d499-bldmf\" (UID: \"445077cf-6de7-4ccc-a14d-002ec401e21f\") " pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf"
Feb 13 19:19:53.774128 containerd[1509]: time="2025-02-13T19:19:53.774072678Z" level=info msg="shim disconnected" id=75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98 namespace=k8s.io
Feb 13 19:19:53.774128 containerd[1509]: time="2025-02-13T19:19:53.774125297Z" level=warning msg="cleaning up after shim disconnected" id=75cd95b5ff875cf4ff70c1ea1105f83db60522c4c876c1ec2b9af82dd3c29a98 namespace=k8s.io
Feb 13 19:19:53.774606 containerd[1509]: time="2025-02-13T19:19:53.774134355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:19:53.828022 kubelet[2625]: E0213 19:19:53.827917 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:53.850885 containerd[1509]: time="2025-02-13T19:19:53.850839145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:0,}"
Feb 13 19:19:53.964738 containerd[1509]: time="2025-02-13T19:19:53.964445361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:0,}"
Feb 13 19:19:54.057811 containerd[1509]: time="2025-02-13T19:19:54.057615176Z" level=error msg="Failed to destroy network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.058436 containerd[1509]: time="2025-02-13T19:19:54.058398360Z" level=error msg="encountered an error cleaning up failed sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.058504 containerd[1509]: time="2025-02-13T19:19:54.058472790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.058762 kubelet[2625]: E0213 19:19:54.058715 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.058823 kubelet[2625]: E0213 19:19:54.058794 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp"
Feb 13 19:19:54.058857 kubelet[2625]: E0213 19:19:54.058818 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp"
Feb 13 19:19:54.058881 kubelet[2625]: E0213 19:19:54.058862 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85"
Feb 13 19:19:54.060829 containerd[1509]: time="2025-02-13T19:19:54.060779921Z" level=error msg="Failed to destroy network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.061283 containerd[1509]: time="2025-02-13T19:19:54.061241830Z" level=error msg="encountered an error cleaning up failed sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.061359 containerd[1509]: time="2025-02-13T19:19:54.061321681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.061719 kubelet[2625]: E0213 19:19:54.061543 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:54.061719 kubelet[2625]: E0213 19:19:54.061597 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw"
Feb 13 19:19:54.061719 kubelet[2625]: E0213 19:19:54.061620 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw"
Feb 13 19:19:54.061850 kubelet[2625]: E0213 19:19:54.061665 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9"
Feb 13 19:19:54.251744 containerd[1509]: time="2025-02-13T19:19:54.251698088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 19:19:54.256543 containerd[1509]: time="2025-02-13T19:19:54.256490457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:0,}"
Feb 13 19:19:54.260022 kubelet[2625]: E0213 19:19:54.259909 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:19:54.260700 containerd[1509]: time="2025-02-13T19:19:54.260457151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:0,}"
Feb 13 19:19:54.268161 systemd[1]: Created slice kubepods-besteffort-pod4c0c44a2_2d4f_44a3_b176_d65ebad0fd01.slice - libcontainer container kubepods-besteffort-pod4c0c44a2_2d4f_44a3_b176_d65ebad0fd01.slice.
Feb 13 19:19:54.271380 containerd[1509]: time="2025-02-13T19:19:54.271348055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:0,}" Feb 13 19:19:54.326344 kubelet[2625]: I0213 19:19:54.326245 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249" Feb 13 19:19:54.331532 containerd[1509]: time="2025-02-13T19:19:54.331142753Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:19:54.331532 containerd[1509]: time="2025-02-13T19:19:54.331363247Z" level=info msg="Ensure that sandbox d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249 in task-service has been cleanup successfully" Feb 13 19:19:54.334575 systemd[1]: run-netns-cni\x2d280cdea3\x2d6e5c\x2d5b8c\x2de074\x2d3b97b2b9cf08.mount: Deactivated successfully. Feb 13 19:19:54.336658 containerd[1509]: time="2025-02-13T19:19:54.336189319Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:19:54.336658 containerd[1509]: time="2025-02-13T19:19:54.336220397Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:19:54.336739 kubelet[2625]: E0213 19:19:54.336488 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:54.338390 containerd[1509]: time="2025-02-13T19:19:54.338349864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:1,}" Feb 13 19:19:54.341487 kubelet[2625]: E0213 19:19:54.341219 2625 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:54.342152 containerd[1509]: time="2025-02-13T19:19:54.342127132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:19:54.343582 kubelet[2625]: I0213 19:19:54.343519 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7" Feb 13 19:19:54.346746 containerd[1509]: time="2025-02-13T19:19:54.346713963Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:19:54.347048 containerd[1509]: time="2025-02-13T19:19:54.346915302Z" level=info msg="Ensure that sandbox 012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7 in task-service has been cleanup successfully" Feb 13 19:19:54.347816 containerd[1509]: time="2025-02-13T19:19:54.347772726Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:19:54.347816 containerd[1509]: time="2025-02-13T19:19:54.347789828Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:19:54.348956 containerd[1509]: time="2025-02-13T19:19:54.348541222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:1,}" Feb 13 19:19:54.370602 containerd[1509]: time="2025-02-13T19:19:54.370552380Z" level=error msg="Failed to destroy network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.372200 
containerd[1509]: time="2025-02-13T19:19:54.372167590Z" level=error msg="encountered an error cleaning up failed sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.372579 containerd[1509]: time="2025-02-13T19:19:54.372226872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.372676 kubelet[2625]: E0213 19:19:54.372547 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.372676 kubelet[2625]: E0213 19:19:54.372606 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:54.372676 kubelet[2625]: E0213 19:19:54.372632 2625 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:54.372765 kubelet[2625]: E0213 19:19:54.372680 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podUID="445077cf-6de7-4ccc-a14d-002ec401e21f" Feb 13 19:19:54.391897 containerd[1509]: time="2025-02-13T19:19:54.391699183Z" level=error msg="Failed to destroy network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.392437 containerd[1509]: time="2025-02-13T19:19:54.392329339Z" level=error msg="encountered an error cleaning up failed sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.392437 containerd[1509]: time="2025-02-13T19:19:54.392405072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.392894 kubelet[2625]: E0213 19:19:54.392861 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.393035 kubelet[2625]: E0213 19:19:54.393020 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:54.393344 kubelet[2625]: E0213 19:19:54.393085 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:54.393344 kubelet[2625]: E0213 19:19:54.393129 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podUID="853008d7-8935-4029-ae11-bd5e471b4687" Feb 13 19:19:54.393673 containerd[1509]: time="2025-02-13T19:19:54.393587476Z" level=error msg="Failed to destroy network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.394154 containerd[1509]: time="2025-02-13T19:19:54.394117294Z" level=error msg="encountered an error cleaning up failed sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.394257 containerd[1509]: time="2025-02-13T19:19:54.394239975Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.394461 kubelet[2625]: E0213 19:19:54.394444 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.394651 kubelet[2625]: E0213 19:19:54.394589 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:54.394651 kubelet[2625]: E0213 19:19:54.394647 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:54.394837 kubelet[2625]: E0213 19:19:54.394685 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dqfv5" podUID="2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e" Feb 13 19:19:54.408510 containerd[1509]: time="2025-02-13T19:19:54.408355586Z" level=error msg="Failed to destroy network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.408954 containerd[1509]: time="2025-02-13T19:19:54.408914848Z" level=error msg="encountered an error cleaning up failed sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.409084 containerd[1509]: time="2025-02-13T19:19:54.409060412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 19:19:54.409331 kubelet[2625]: E0213 19:19:54.409282 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.409400 kubelet[2625]: E0213 19:19:54.409349 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:54.409400 kubelet[2625]: E0213 19:19:54.409372 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:54.409455 kubelet[2625]: E0213 19:19:54.409411 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01" Feb 13 19:19:54.434785 containerd[1509]: time="2025-02-13T19:19:54.434736337Z" level=error msg="Failed to destroy network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.435146 containerd[1509]: time="2025-02-13T19:19:54.435116513Z" level=error msg="encountered an error cleaning up failed sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.435217 containerd[1509]: time="2025-02-13T19:19:54.435197004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.435426 containerd[1509]: time="2025-02-13T19:19:54.435349651Z" level=error msg="Failed to destroy network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 19:19:54.435472 kubelet[2625]: E0213 19:19:54.435400 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.435472 kubelet[2625]: E0213 19:19:54.435464 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:54.435569 kubelet[2625]: E0213 19:19:54.435491 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:54.435597 kubelet[2625]: E0213 19:19:54.435558 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9" Feb 13 19:19:54.435743 containerd[1509]: time="2025-02-13T19:19:54.435721772Z" level=error msg="encountered an error cleaning up failed sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.435822 containerd[1509]: time="2025-02-13T19:19:54.435790000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.436091 kubelet[2625]: E0213 19:19:54.436052 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:54.436152 kubelet[2625]: E0213 19:19:54.436106 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:54.436152 kubelet[2625]: E0213 19:19:54.436132 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:54.436238 kubelet[2625]: E0213 19:19:54.436170 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85" Feb 13 19:19:54.583588 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:35314.service - OpenSSH per-connection server daemon (10.0.0.1:35314). 
Feb 13 19:19:54.640410 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 35314 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:19:54.642186 sshd-session[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:19:54.646852 systemd-logind[1499]: New session 8 of user core. Feb 13 19:19:54.665070 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:19:54.796571 sshd[3712]: Connection closed by 10.0.0.1 port 35314 Feb 13 19:19:54.796879 sshd-session[3710]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:54.800449 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:35314.service: Deactivated successfully. Feb 13 19:19:54.802321 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:19:54.802992 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:19:54.803805 systemd-logind[1499]: Removed session 8. Feb 13 19:19:55.294926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0-shm.mount: Deactivated successfully. Feb 13 19:19:55.295046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd-shm.mount: Deactivated successfully. Feb 13 19:19:55.295127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9-shm.mount: Deactivated successfully. Feb 13 19:19:55.295204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d-shm.mount: Deactivated successfully. Feb 13 19:19:55.295281 systemd[1]: run-netns-cni\x2df4f10d1f\x2d22f9\x2d295f\x2d684b\x2dda5ea2443bca.mount: Deactivated successfully. 
Feb 13 19:19:55.345955 kubelet[2625]: I0213 19:19:55.345311 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157" Feb 13 19:19:55.346347 containerd[1509]: time="2025-02-13T19:19:55.345677317Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:19:55.346347 containerd[1509]: time="2025-02-13T19:19:55.345865221Z" level=info msg="Ensure that sandbox a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157 in task-service has been cleanup successfully" Feb 13 19:19:55.348045 kubelet[2625]: I0213 19:19:55.347727 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f" Feb 13 19:19:55.348157 systemd[1]: run-netns-cni\x2dccff9b30\x2dbd20\x2de0d0\x2de75e\x2d2421a786bdfb.mount: Deactivated successfully. Feb 13 19:19:55.349201 containerd[1509]: time="2025-02-13T19:19:55.348245779Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:19:55.349201 containerd[1509]: time="2025-02-13T19:19:55.348429495Z" level=info msg="Ensure that sandbox 579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f in task-service has been cleanup successfully" Feb 13 19:19:55.349379 containerd[1509]: time="2025-02-13T19:19:55.349304061Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:19:55.349379 containerd[1509]: time="2025-02-13T19:19:55.349321804Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:19:55.349454 containerd[1509]: time="2025-02-13T19:19:55.349432242Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" 
successfully" Feb 13 19:19:55.349454 containerd[1509]: time="2025-02-13T19:19:55.349448763Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:19:55.349624 containerd[1509]: time="2025-02-13T19:19:55.349602913Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:19:55.349652 kubelet[2625]: I0213 19:19:55.349625 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd" Feb 13 19:19:55.349715 containerd[1509]: time="2025-02-13T19:19:55.349688494Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:19:55.349715 containerd[1509]: time="2025-02-13T19:19:55.349704353Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:19:55.350057 containerd[1509]: time="2025-02-13T19:19:55.350025938Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:19:55.350592 containerd[1509]: time="2025-02-13T19:19:55.350262614Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:19:55.350592 containerd[1509]: time="2025-02-13T19:19:55.350278093Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:19:55.350592 containerd[1509]: time="2025-02-13T19:19:55.350089569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:2,}" Feb 13 19:19:55.350592 containerd[1509]: time="2025-02-13T19:19:55.350149371Z" level=info msg="StopPodSandbox for 
\"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:19:55.350765 containerd[1509]: time="2025-02-13T19:19:55.350744039Z" level=info msg="Ensure that sandbox d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd in task-service has been cleanup successfully" Feb 13 19:19:55.351008 containerd[1509]: time="2025-02-13T19:19:55.350986987Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:19:55.351054 containerd[1509]: time="2025-02-13T19:19:55.351006643Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 19:19:55.351576 systemd[1]: run-netns-cni\x2ddb573515\x2d63ef\x2da944\x2d28e7\x2daca9009efe4f.mount: Deactivated successfully. Feb 13 19:19:55.354979 systemd[1]: run-netns-cni\x2d12a1db7e\x2d96da\x2dfae4\x2db815\x2dcecde375e559.mount: Deactivated successfully. Feb 13 19:19:55.356172 kubelet[2625]: E0213 19:19:55.356040 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:55.356241 kubelet[2625]: E0213 19:19:55.356183 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:55.358738 kubelet[2625]: I0213 19:19:55.358705 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9" Feb 13 19:19:55.359063 containerd[1509]: time="2025-02-13T19:19:55.359024087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:2,}" Feb 13 19:19:55.359297 containerd[1509]: time="2025-02-13T19:19:55.359027343Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:1,}" Feb 13 19:19:55.359762 containerd[1509]: time="2025-02-13T19:19:55.359735385Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:19:55.360018 containerd[1509]: time="2025-02-13T19:19:55.359920263Z" level=info msg="Ensure that sandbox a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9 in task-service has been cleanup successfully" Feb 13 19:19:55.360163 containerd[1509]: time="2025-02-13T19:19:55.360140478Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:19:55.360163 containerd[1509]: time="2025-02-13T19:19:55.360157049Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:19:55.361507 kubelet[2625]: I0213 19:19:55.361439 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0" Feb 13 19:19:55.362252 containerd[1509]: time="2025-02-13T19:19:55.361918953Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:19:55.362252 containerd[1509]: time="2025-02-13T19:19:55.362123308Z" level=info msg="Ensure that sandbox 9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0 in task-service has been cleanup successfully" Feb 13 19:19:55.362650 containerd[1509]: time="2025-02-13T19:19:55.362618470Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:19:55.362739 systemd[1]: run-netns-cni\x2df1991c62\x2ddeaa\x2d12a3\x2d2c87\x2dbd45ae5fd39a.mount: Deactivated successfully. 
Feb 13 19:19:55.363007 containerd[1509]: time="2025-02-13T19:19:55.362984518Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:19:55.363085 containerd[1509]: time="2025-02-13T19:19:55.362841118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:19:55.363804 containerd[1509]: time="2025-02-13T19:19:55.363776898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:1,}" Feb 13 19:19:55.364073 kubelet[2625]: I0213 19:19:55.364049 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d" Feb 13 19:19:55.364459 containerd[1509]: time="2025-02-13T19:19:55.364434636Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:19:55.364659 containerd[1509]: time="2025-02-13T19:19:55.364629012Z" level=info msg="Ensure that sandbox e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d in task-service has been cleanup successfully" Feb 13 19:19:55.364819 containerd[1509]: time="2025-02-13T19:19:55.364800114Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:19:55.364853 containerd[1509]: time="2025-02-13T19:19:55.364817607Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:19:55.365292 containerd[1509]: time="2025-02-13T19:19:55.365144512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:1,}" Feb 13 
19:19:56.259397 containerd[1509]: time="2025-02-13T19:19:56.259208348Z" level=error msg="Failed to destroy network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.259899 containerd[1509]: time="2025-02-13T19:19:56.259869061Z" level=error msg="encountered an error cleaning up failed sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.260879 containerd[1509]: time="2025-02-13T19:19:56.260020726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.260968 kubelet[2625]: E0213 19:19:56.260336 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.260968 kubelet[2625]: E0213 19:19:56.260404 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:56.260968 kubelet[2625]: E0213 19:19:56.260445 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:56.261097 kubelet[2625]: E0213 19:19:56.260555 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9" Feb 13 19:19:56.261972 containerd[1509]: time="2025-02-13T19:19:56.261890823Z" level=error msg="Failed to destroy network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:19:56.262802 containerd[1509]: time="2025-02-13T19:19:56.262645302Z" level=error msg="encountered an error cleaning up failed sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.262802 containerd[1509]: time="2025-02-13T19:19:56.262709894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.263245 kubelet[2625]: E0213 19:19:56.263068 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.263245 kubelet[2625]: E0213 19:19:56.263121 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:56.263245 kubelet[2625]: E0213 19:19:56.263148 2625 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:56.263370 kubelet[2625]: E0213 19:19:56.263190 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01" Feb 13 19:19:56.268034 containerd[1509]: time="2025-02-13T19:19:56.267815488Z" level=error msg="Failed to destroy network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.268678 containerd[1509]: time="2025-02-13T19:19:56.268651139Z" level=error msg="encountered an error cleaning up failed sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.268825 containerd[1509]: time="2025-02-13T19:19:56.268798126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.269392 kubelet[2625]: E0213 19:19:56.269175 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.269392 kubelet[2625]: E0213 19:19:56.269241 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:56.269392 kubelet[2625]: E0213 19:19:56.269272 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:56.269542 kubelet[2625]: E0213 19:19:56.269324 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85" Feb 13 19:19:56.278892 containerd[1509]: time="2025-02-13T19:19:56.278823623Z" level=error msg="Failed to destroy network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.279954 containerd[1509]: time="2025-02-13T19:19:56.279890199Z" level=error msg="encountered an error cleaning up failed sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.280030 containerd[1509]: time="2025-02-13T19:19:56.279996229Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.280254 kubelet[2625]: E0213 19:19:56.280218 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.280326 kubelet[2625]: E0213 19:19:56.280275 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:56.280326 kubelet[2625]: E0213 19:19:56.280295 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:56.280453 kubelet[2625]: E0213 19:19:56.280339 2625 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podUID="445077cf-6de7-4ccc-a14d-002ec401e21f" Feb 13 19:19:56.281157 containerd[1509]: time="2025-02-13T19:19:56.281111327Z" level=error msg="Failed to destroy network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.281553 containerd[1509]: time="2025-02-13T19:19:56.281524022Z" level=error msg="encountered an error cleaning up failed sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.281606 containerd[1509]: time="2025-02-13T19:19:56.281582032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.281822 kubelet[2625]: E0213 19:19:56.281786 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.281953 kubelet[2625]: E0213 19:19:56.281846 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:56.281953 kubelet[2625]: E0213 19:19:56.281871 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:56.281953 kubelet[2625]: E0213 19:19:56.281918 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podUID="853008d7-8935-4029-ae11-bd5e471b4687" Feb 13 19:19:56.287257 containerd[1509]: time="2025-02-13T19:19:56.287224775Z" level=error msg="Failed to destroy network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.287568 containerd[1509]: time="2025-02-13T19:19:56.287542884Z" level=error msg="encountered an error cleaning up failed sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.287609 containerd[1509]: time="2025-02-13T19:19:56.287587347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.287800 kubelet[2625]: E0213 19:19:56.287760 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.287851 kubelet[2625]: E0213 19:19:56.287821 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:56.287880 kubelet[2625]: E0213 19:19:56.287844 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:56.287966 kubelet[2625]: E0213 19:19:56.287893 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dqfv5" 
podUID="2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e" Feb 13 19:19:56.296917 systemd[1]: run-netns-cni\x2d8180295c\x2d373d\x2d1bfb\x2d1d1b\x2d3d93be0344fc.mount: Deactivated successfully. Feb 13 19:19:56.297068 systemd[1]: run-netns-cni\x2d36880afa\x2d44e2\x2d2c38\x2de820\x2d8483d6d51728.mount: Deactivated successfully. Feb 13 19:19:56.366597 kubelet[2625]: I0213 19:19:56.366550 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd" Feb 13 19:19:56.367236 containerd[1509]: time="2025-02-13T19:19:56.367175480Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:19:56.367561 containerd[1509]: time="2025-02-13T19:19:56.367441140Z" level=info msg="Ensure that sandbox 072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd in task-service has been cleanup successfully" Feb 13 19:19:56.367911 kubelet[2625]: I0213 19:19:56.367864 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d" Feb 13 19:19:56.369006 containerd[1509]: time="2025-02-13T19:19:56.368104748Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully" Feb 13 19:19:56.369054 containerd[1509]: time="2025-02-13T19:19:56.369002667Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully" Feb 13 19:19:56.369104 containerd[1509]: time="2025-02-13T19:19:56.368665202Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" Feb 13 19:19:56.369436 containerd[1509]: time="2025-02-13T19:19:56.369404383Z" level=info msg="Ensure that sandbox 112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d in task-service has been cleanup successfully" Feb 13 
19:19:56.369663 containerd[1509]: time="2025-02-13T19:19:56.369627512Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully" Feb 13 19:19:56.369705 containerd[1509]: time="2025-02-13T19:19:56.369667838Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully" Feb 13 19:19:56.370950 containerd[1509]: time="2025-02-13T19:19:56.370266835Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:19:56.370950 containerd[1509]: time="2025-02-13T19:19:56.370364188Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:19:56.370950 containerd[1509]: time="2025-02-13T19:19:56.370374597Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 19:19:56.370950 containerd[1509]: time="2025-02-13T19:19:56.370507658Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:19:56.370950 containerd[1509]: time="2025-02-13T19:19:56.370607406Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:19:56.370950 containerd[1509]: time="2025-02-13T19:19:56.370620470Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:19:56.371194 kubelet[2625]: I0213 19:19:56.370577 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a" Feb 13 19:19:56.371194 kubelet[2625]: E0213 19:19:56.371113 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:56.371525 containerd[1509]: time="2025-02-13T19:19:56.371489575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:2,}" Feb 13 19:19:56.371566 containerd[1509]: time="2025-02-13T19:19:56.371513380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:19:56.371663 systemd[1]: run-netns-cni\x2da9730170\x2da9ef\x2d4cd2\x2df21e\x2d19f6c2c2d055.mount: Deactivated successfully. Feb 13 19:19:56.371909 containerd[1509]: time="2025-02-13T19:19:56.371500916Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:19:56.371909 containerd[1509]: time="2025-02-13T19:19:56.371850644Z" level=info msg="Ensure that sandbox a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a in task-service has been cleanup successfully" Feb 13 19:19:56.374025 containerd[1509]: time="2025-02-13T19:19:56.372126012Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:19:56.374025 containerd[1509]: time="2025-02-13T19:19:56.372147782Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:19:56.374562 containerd[1509]: time="2025-02-13T19:19:56.374531877Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:19:56.374664 containerd[1509]: time="2025-02-13T19:19:56.374633769Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:19:56.374664 containerd[1509]: time="2025-02-13T19:19:56.374650810Z" level=info 
msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:19:56.375111 containerd[1509]: time="2025-02-13T19:19:56.375086410Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:19:56.375194 containerd[1509]: time="2025-02-13T19:19:56.375162814Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:19:56.375194 containerd[1509]: time="2025-02-13T19:19:56.375171841Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:19:56.375809 containerd[1509]: time="2025-02-13T19:19:56.375765698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:3,}" Feb 13 19:19:56.376506 systemd[1]: run-netns-cni\x2d519bda30\x2d1859\x2d573d\x2d842c\x2d89007d19366b.mount: Deactivated successfully. Feb 13 19:19:56.376631 kubelet[2625]: I0213 19:19:56.376582 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a" Feb 13 19:19:56.376644 systemd[1]: run-netns-cni\x2d0b8ae572\x2d9117\x2dd193\x2dc004\x2d48fab4e8bc50.mount: Deactivated successfully. 
Feb 13 19:19:56.377506 containerd[1509]: time="2025-02-13T19:19:56.377481104Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:19:56.377676 containerd[1509]: time="2025-02-13T19:19:56.377656384Z" level=info msg="Ensure that sandbox ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a in task-service has been cleanup successfully" Feb 13 19:19:56.378049 containerd[1509]: time="2025-02-13T19:19:56.377846822Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully" Feb 13 19:19:56.378049 containerd[1509]: time="2025-02-13T19:19:56.377866980Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully" Feb 13 19:19:56.378595 containerd[1509]: time="2025-02-13T19:19:56.378429499Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:19:56.378595 containerd[1509]: time="2025-02-13T19:19:56.378509208Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully" Feb 13 19:19:56.378595 containerd[1509]: time="2025-02-13T19:19:56.378518215Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:19:56.379484 kubelet[2625]: I0213 19:19:56.379455 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79" Feb 13 19:19:56.380128 containerd[1509]: time="2025-02-13T19:19:56.378955898Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:19:56.380128 containerd[1509]: time="2025-02-13T19:19:56.380020581Z" level=info msg="TearDown network for sandbox 
\"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:19:56.380128 containerd[1509]: time="2025-02-13T19:19:56.380033997Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:19:56.380128 containerd[1509]: time="2025-02-13T19:19:56.379912488Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:19:56.380337 containerd[1509]: time="2025-02-13T19:19:56.380304295Z" level=info msg="Ensure that sandbox 5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79 in task-service has been cleanup successfully" Feb 13 19:19:56.380877 kubelet[2625]: E0213 19:19:56.380859 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:56.381180 containerd[1509]: time="2025-02-13T19:19:56.381147050Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:19:56.381180 containerd[1509]: time="2025-02-13T19:19:56.381169292Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:19:56.382328 containerd[1509]: time="2025-02-13T19:19:56.381387272Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:19:56.382328 containerd[1509]: time="2025-02-13T19:19:56.381438257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:3,}" Feb 13 19:19:56.382328 containerd[1509]: time="2025-02-13T19:19:56.381461492Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 
19:19:56.382328 containerd[1509]: time="2025-02-13T19:19:56.381482371Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:19:56.381669 systemd[1]: run-netns-cni\x2d44720f95\x2d96d5\x2dc970\x2d0639\x2d2e37ef7672f7.mount: Deactivated successfully. Feb 13 19:19:56.382548 kubelet[2625]: I0213 19:19:56.381625 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e" Feb 13 19:19:56.382604 containerd[1509]: time="2025-02-13T19:19:56.382331958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:19:56.382604 containerd[1509]: time="2025-02-13T19:19:56.382543848Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:19:56.382722 containerd[1509]: time="2025-02-13T19:19:56.382675314Z" level=info msg="Ensure that sandbox fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e in task-service has been cleanup successfully" Feb 13 19:19:56.383047 containerd[1509]: time="2025-02-13T19:19:56.382991018Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:19:56.383047 containerd[1509]: time="2025-02-13T19:19:56.383023870Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:19:56.383338 containerd[1509]: time="2025-02-13T19:19:56.383280423Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:19:56.383388 containerd[1509]: time="2025-02-13T19:19:56.383359111Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" 
successfully" Feb 13 19:19:56.383388 containerd[1509]: time="2025-02-13T19:19:56.383368929Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:19:56.383873 containerd[1509]: time="2025-02-13T19:19:56.383845255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:2,}" Feb 13 19:19:56.385314 systemd[1]: run-netns-cni\x2dee8f20a8\x2db2bd\x2d9df6\x2d4dbf\x2d968103b44880.mount: Deactivated successfully. Feb 13 19:19:56.385440 systemd[1]: run-netns-cni\x2df0ef54f0\x2db5bf\x2d022b\x2db0dc\x2de53fc525a81d.mount: Deactivated successfully. Feb 13 19:19:56.578849 containerd[1509]: time="2025-02-13T19:19:56.578660796Z" level=error msg="Failed to destroy network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.580154 containerd[1509]: time="2025-02-13T19:19:56.580110833Z" level=error msg="encountered an error cleaning up failed sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.580271 containerd[1509]: time="2025-02-13T19:19:56.580190843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.580543 kubelet[2625]: E0213 19:19:56.580505 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.580665 kubelet[2625]: E0213 19:19:56.580635 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:56.580720 kubelet[2625]: E0213 19:19:56.580670 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:56.580811 kubelet[2625]: E0213 19:19:56.580717 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85" Feb 13 19:19:56.581042 containerd[1509]: time="2025-02-13T19:19:56.581008121Z" level=error msg="Failed to destroy network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.581543 containerd[1509]: time="2025-02-13T19:19:56.581516127Z" level=error msg="encountered an error cleaning up failed sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.581677 containerd[1509]: time="2025-02-13T19:19:56.581654246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.583233 kubelet[2625]: E0213 19:19:56.583191 2625 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.583301 kubelet[2625]: E0213 19:19:56.583266 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:56.583301 kubelet[2625]: E0213 19:19:56.583288 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:56.583353 kubelet[2625]: E0213 19:19:56.583326 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dqfv5" podUID="2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e" Feb 13 19:19:56.585594 containerd[1509]: time="2025-02-13T19:19:56.585558519Z" level=error msg="Failed to destroy network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.586066 containerd[1509]: time="2025-02-13T19:19:56.586043392Z" level=error msg="encountered an error cleaning up failed sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.586169 containerd[1509]: time="2025-02-13T19:19:56.586150714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.586546 kubelet[2625]: E0213 19:19:56.586510 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:19:56.586598 kubelet[2625]: E0213 19:19:56.586555 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:56.586598 kubelet[2625]: E0213 19:19:56.586576 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:56.586656 kubelet[2625]: E0213 19:19:56.586613 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podUID="853008d7-8935-4029-ae11-bd5e471b4687" Feb 13 19:19:56.603238 containerd[1509]: time="2025-02-13T19:19:56.603192679Z" level=error msg="Failed to destroy network for sandbox 
\"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.603816 containerd[1509]: time="2025-02-13T19:19:56.603793519Z" level=error msg="encountered an error cleaning up failed sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.603981 containerd[1509]: time="2025-02-13T19:19:56.603925578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.604763 kubelet[2625]: E0213 19:19:56.604348 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.604763 kubelet[2625]: E0213 19:19:56.604406 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:56.604763 kubelet[2625]: E0213 19:19:56.604434 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:56.604876 kubelet[2625]: E0213 19:19:56.604493 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01" Feb 13 19:19:56.607528 containerd[1509]: time="2025-02-13T19:19:56.607449436Z" level=error msg="Failed to destroy network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.608155 containerd[1509]: time="2025-02-13T19:19:56.608066427Z" level=error msg="encountered an error cleaning up failed sandbox 
\"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.608155 containerd[1509]: time="2025-02-13T19:19:56.608141167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.608537 kubelet[2625]: E0213 19:19:56.608402 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.608537 kubelet[2625]: E0213 19:19:56.608506 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:56.608537 kubelet[2625]: E0213 19:19:56.608527 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:56.608663 kubelet[2625]: E0213 19:19:56.608567 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podUID="445077cf-6de7-4ccc-a14d-002ec401e21f" Feb 13 19:19:56.614642 containerd[1509]: time="2025-02-13T19:19:56.614598282Z" level=error msg="Failed to destroy network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.614994 containerd[1509]: time="2025-02-13T19:19:56.614957628Z" level=error msg="encountered an error cleaning up failed sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:19:56.615056 containerd[1509]: time="2025-02-13T19:19:56.615024685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.615271 kubelet[2625]: E0213 19:19:56.615226 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:56.615337 kubelet[2625]: E0213 19:19:56.615292 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:56.615337 kubelet[2625]: E0213 19:19:56.615312 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:56.615396 kubelet[2625]: 
E0213 19:19:56.615356 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9" Feb 13 19:19:57.385235 kubelet[2625]: I0213 19:19:57.385196 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb" Feb 13 19:19:57.385787 containerd[1509]: time="2025-02-13T19:19:57.385752909Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:19:57.386143 containerd[1509]: time="2025-02-13T19:19:57.386075076Z" level=info msg="Ensure that sandbox 8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb in task-service has been cleanup successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.386398945Z" level=info msg="TearDown network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.386417780Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" returns successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.386699319Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 
19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.386763961Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.386773720Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.387088932Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.387173822Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.387182629Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.387815058Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388000587Z" level=info msg="Ensure that sandbox db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453 in task-service has been cleanup successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388179844Z" level=info msg="TearDown network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388190434Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" returns successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388643215Z" level=info msg="StopPodSandbox for 
\"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388723426Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388734176Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:19:57.388953 containerd[1509]: time="2025-02-13T19:19:57.388821781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:3,}" Feb 13 19:19:57.389480 kubelet[2625]: I0213 19:19:57.387454 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453" Feb 13 19:19:57.389529 containerd[1509]: time="2025-02-13T19:19:57.389029181Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:19:57.389726 systemd[1]: run-netns-cni\x2d28dfde49\x2de62d\x2d0f88\x2dbff7\x2d4628293cfd56.mount: Deactivated successfully. 
Feb 13 19:19:57.390148 kubelet[2625]: I0213 19:19:57.389841 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244" Feb 13 19:19:57.392198 kubelet[2625]: I0213 19:19:57.392172 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f" Feb 13 19:19:57.394138 kubelet[2625]: I0213 19:19:57.394109 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b" Feb 13 19:19:57.394890 systemd[1]: run-netns-cni\x2dba352f88\x2d8047\x2d6908\x2d8e58\x2ddf252caa7a5f.mount: Deactivated successfully. Feb 13 19:19:57.396374 kubelet[2625]: I0213 19:19:57.396359 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155" Feb 13 19:19:57.408596 containerd[1509]: time="2025-02-13T19:19:57.389153595Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:19:57.408596 containerd[1509]: time="2025-02-13T19:19:57.408583374Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:19:57.408770 containerd[1509]: time="2025-02-13T19:19:57.390209591Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:19:57.408770 containerd[1509]: time="2025-02-13T19:19:57.392983347Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" Feb 13 19:19:57.408770 containerd[1509]: time="2025-02-13T19:19:57.394528934Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\"" Feb 13 19:19:57.409192 containerd[1509]: 
time="2025-02-13T19:19:57.408863592Z" level=info msg="Ensure that sandbox 09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244 in task-service has been cleanup successfully" Feb 13 19:19:57.409192 containerd[1509]: time="2025-02-13T19:19:57.396768395Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\"" Feb 13 19:19:57.409192 containerd[1509]: time="2025-02-13T19:19:57.409011190Z" level=info msg="Ensure that sandbox 3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155 in task-service has been cleanup successfully" Feb 13 19:19:57.409192 containerd[1509]: time="2025-02-13T19:19:57.409135783Z" level=info msg="Ensure that sandbox 6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b in task-service has been cleanup successfully" Feb 13 19:19:57.409529 containerd[1509]: time="2025-02-13T19:19:57.409497614Z" level=info msg="TearDown network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" successfully" Feb 13 19:19:57.409529 containerd[1509]: time="2025-02-13T19:19:57.409516720Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" returns successfully" Feb 13 19:19:57.409592 containerd[1509]: time="2025-02-13T19:19:57.408869553Z" level=info msg="Ensure that sandbox aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f in task-service has been cleanup successfully" Feb 13 19:19:57.409592 containerd[1509]: time="2025-02-13T19:19:57.409577715Z" level=info msg="TearDown network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" successfully" Feb 13 19:19:57.409635 containerd[1509]: time="2025-02-13T19:19:57.409596490Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" returns successfully" Feb 13 19:19:57.409671 containerd[1509]: time="2025-02-13T19:19:57.409596711Z" level=info msg="TearDown network for sandbox 
\"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" successfully" Feb 13 19:19:57.409702 containerd[1509]: time="2025-02-13T19:19:57.409670840Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" returns successfully" Feb 13 19:19:57.409748 containerd[1509]: time="2025-02-13T19:19:57.409724971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:19:57.410093 containerd[1509]: time="2025-02-13T19:19:57.410052848Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:19:57.410204 containerd[1509]: time="2025-02-13T19:19:57.410104936Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:19:57.410204 containerd[1509]: time="2025-02-13T19:19:57.410150072Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully" Feb 13 19:19:57.410204 containerd[1509]: time="2025-02-13T19:19:57.410163747Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully" Feb 13 19:19:57.410204 containerd[1509]: time="2025-02-13T19:19:57.410199214Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:19:57.410335 containerd[1509]: time="2025-02-13T19:19:57.410206968Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" Feb 13 19:19:57.410335 containerd[1509]: time="2025-02-13T19:19:57.410302307Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully" Feb 13 19:19:57.410335 containerd[1509]: time="2025-02-13T19:19:57.410313599Z" 
level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully" Feb 13 19:19:57.410399 containerd[1509]: time="2025-02-13T19:19:57.410361188Z" level=info msg="TearDown network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" successfully" Feb 13 19:19:57.410455 containerd[1509]: time="2025-02-13T19:19:57.410430909Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" returns successfully" Feb 13 19:19:57.410726 containerd[1509]: time="2025-02-13T19:19:57.410214613Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:19:57.410726 containerd[1509]: time="2025-02-13T19:19:57.410613683Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:19:57.410984 containerd[1509]: time="2025-02-13T19:19:57.410633530Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:19:57.410984 containerd[1509]: time="2025-02-13T19:19:57.410897547Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:19:57.410984 containerd[1509]: time="2025-02-13T19:19:57.410910662Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411183976Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411254608Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411262975Z" 
level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411296958Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411339338Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411353945Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411354637Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411361810Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully" Feb 13 19:19:57.411650 containerd[1509]: time="2025-02-13T19:19:57.411467139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:19:57.411899 containerd[1509]: time="2025-02-13T19:19:57.411762063Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:19:57.411899 containerd[1509]: time="2025-02-13T19:19:57.411841142Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:19:57.411899 containerd[1509]: time="2025-02-13T19:19:57.411852072Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 
19:19:57.412217 kubelet[2625]: E0213 19:19:57.412101 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:57.412138 systemd[1]: run-netns-cni\x2d267095b0\x2d8b41\x2d9222\x2d0842\x2d98eda006fcfe.mount: Deactivated successfully. Feb 13 19:19:57.412239 systemd[1]: run-netns-cni\x2d6e66aebf\x2d3203\x2df07f\x2dc178\x2d89a1890e0864.mount: Deactivated successfully. Feb 13 19:19:57.412332 systemd[1]: run-netns-cni\x2d31f0fa78\x2daa7b\x2d3507\x2d072a\x2d05e37b3e0b62.mount: Deactivated successfully. Feb 13 19:19:57.412405 systemd[1]: run-netns-cni\x2da7692071\x2d09ef\x2d5d7b\x2de407\x2d8cacea7c48b2.mount: Deactivated successfully. Feb 13 19:19:57.412865 containerd[1509]: time="2025-02-13T19:19:57.412530818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:4,}" Feb 13 19:19:57.412865 containerd[1509]: time="2025-02-13T19:19:57.412555865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:3,}" Feb 13 19:19:57.412865 containerd[1509]: time="2025-02-13T19:19:57.412741284Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:19:57.412865 containerd[1509]: time="2025-02-13T19:19:57.412812438Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully" Feb 13 19:19:57.412865 containerd[1509]: time="2025-02-13T19:19:57.412821505Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:19:57.413247 containerd[1509]: time="2025-02-13T19:19:57.413218833Z" level=info msg="StopPodSandbox for 
\"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:19:57.413392 containerd[1509]: time="2025-02-13T19:19:57.413288163Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:19:57.413392 containerd[1509]: time="2025-02-13T19:19:57.413301097Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:19:57.413664 kubelet[2625]: E0213 19:19:57.413540 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:57.413787 containerd[1509]: time="2025-02-13T19:19:57.413765040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:4,}" Feb 13 19:19:58.531615 containerd[1509]: time="2025-02-13T19:19:58.531459444Z" level=error msg="Failed to destroy network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.532832 containerd[1509]: time="2025-02-13T19:19:58.532684998Z" level=error msg="encountered an error cleaning up failed sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.532832 containerd[1509]: time="2025-02-13T19:19:58.532745191Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.533131 kubelet[2625]: E0213 19:19:58.532962 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.534633 kubelet[2625]: E0213 19:19:58.534480 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:58.534633 kubelet[2625]: E0213 19:19:58.534526 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:19:58.541883 kubelet[2625]: E0213 19:19:58.541716 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9" Feb 13 19:19:58.546341 containerd[1509]: time="2025-02-13T19:19:58.544146809Z" level=error msg="Failed to destroy network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.546341 containerd[1509]: time="2025-02-13T19:19:58.544755594Z" level=error msg="encountered an error cleaning up failed sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.546341 containerd[1509]: time="2025-02-13T19:19:58.544824815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 19:19:58.546557 kubelet[2625]: E0213 19:19:58.545125 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.546557 kubelet[2625]: E0213 19:19:58.545207 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:58.546557 kubelet[2625]: E0213 19:19:58.545235 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:19:58.546783 kubelet[2625]: E0213 19:19:58.545289 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01" Feb 13 19:19:58.554891 containerd[1509]: time="2025-02-13T19:19:58.554720351Z" level=error msg="Failed to destroy network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.555464 containerd[1509]: time="2025-02-13T19:19:58.555429424Z" level=error msg="encountered an error cleaning up failed sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.555605 containerd[1509]: time="2025-02-13T19:19:58.555581140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.556147 kubelet[2625]: E0213 19:19:58.555949 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.556147 kubelet[2625]: E0213 19:19:58.556016 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:58.556147 kubelet[2625]: E0213 19:19:58.556042 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:19:58.556315 kubelet[2625]: E0213 19:19:58.556091 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dqfv5" podUID="2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e" Feb 13 19:19:58.562346 containerd[1509]: time="2025-02-13T19:19:58.561789883Z" level=error msg="Failed to destroy network for 
sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.562346 containerd[1509]: time="2025-02-13T19:19:58.562186940Z" level=error msg="encountered an error cleaning up failed sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.562346 containerd[1509]: time="2025-02-13T19:19:58.562237776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.562566 kubelet[2625]: E0213 19:19:58.562442 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.562566 kubelet[2625]: E0213 19:19:58.562498 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:58.562566 kubelet[2625]: E0213 19:19:58.562522 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:19:58.562681 kubelet[2625]: E0213 19:19:58.562597 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podUID="445077cf-6de7-4ccc-a14d-002ec401e21f" Feb 13 19:19:58.564161 containerd[1509]: time="2025-02-13T19:19:58.563566264Z" level=error msg="Failed to destroy network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.564161 
containerd[1509]: time="2025-02-13T19:19:58.564048040Z" level=error msg="encountered an error cleaning up failed sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.564161 containerd[1509]: time="2025-02-13T19:19:58.564112641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.564601 kubelet[2625]: E0213 19:19:58.564577 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.564655 kubelet[2625]: E0213 19:19:58.564604 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:58.564655 kubelet[2625]: E0213 19:19:58.564620 2625 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:19:58.564655 kubelet[2625]: E0213 19:19:58.564646 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podUID="853008d7-8935-4029-ae11-bd5e471b4687" Feb 13 19:19:58.567490 containerd[1509]: time="2025-02-13T19:19:58.567441981Z" level=error msg="Failed to destroy network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.567867 containerd[1509]: time="2025-02-13T19:19:58.567830942Z" level=error msg="encountered an error cleaning up failed sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.567911 containerd[1509]: time="2025-02-13T19:19:58.567887169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.568296 kubelet[2625]: E0213 19:19:58.568254 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:58.568359 kubelet[2625]: E0213 19:19:58.568331 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:58.568394 kubelet[2625]: E0213 19:19:58.568359 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:19:58.568464 kubelet[2625]: E0213 19:19:58.568430 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85" Feb 13 19:19:59.380053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9-shm.mount: Deactivated successfully. Feb 13 19:19:59.380161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008-shm.mount: Deactivated successfully. Feb 13 19:19:59.380238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604-shm.mount: Deactivated successfully. 
Feb 13 19:19:59.433154 kubelet[2625]: I0213 19:19:59.433104 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9" Feb 13 19:19:59.434784 kubelet[2625]: I0213 19:19:59.434753 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604" Feb 13 19:19:59.436641 kubelet[2625]: I0213 19:19:59.436617 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db" Feb 13 19:19:59.439563 containerd[1509]: time="2025-02-13T19:19:59.439061226Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" Feb 13 19:19:59.439563 containerd[1509]: time="2025-02-13T19:19:59.439118714Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" Feb 13 19:19:59.439563 containerd[1509]: time="2025-02-13T19:19:59.439250051Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\"" Feb 13 19:19:59.439563 containerd[1509]: time="2025-02-13T19:19:59.439314382Z" level=info msg="Ensure that sandbox 428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9 in task-service has been cleanup successfully" Feb 13 19:19:59.439563 containerd[1509]: time="2025-02-13T19:19:59.439366952Z" level=info msg="Ensure that sandbox a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604 in task-service has been cleanup successfully" Feb 13 19:19:59.439563 containerd[1509]: time="2025-02-13T19:19:59.439510962Z" level=info msg="Ensure that sandbox 9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db in task-service has been cleanup successfully" Feb 13 19:19:59.440119 kubelet[2625]: I0213 19:19:59.440065 2625 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008" Feb 13 19:19:59.440534 containerd[1509]: time="2025-02-13T19:19:59.440489722Z" level=info msg="TearDown network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" successfully" Feb 13 19:19:59.440534 containerd[1509]: time="2025-02-13T19:19:59.440512074Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" returns successfully" Feb 13 19:19:59.440709 containerd[1509]: time="2025-02-13T19:19:59.440646256Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\"" Feb 13 19:19:59.440809 containerd[1509]: time="2025-02-13T19:19:59.440786840Z" level=info msg="Ensure that sandbox 809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008 in task-service has been cleanup successfully" Feb 13 19:19:59.441651 containerd[1509]: time="2025-02-13T19:19:59.441428357Z" level=info msg="TearDown network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" successfully" Feb 13 19:19:59.441651 containerd[1509]: time="2025-02-13T19:19:59.441451730Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" returns successfully" Feb 13 19:19:59.441651 containerd[1509]: time="2025-02-13T19:19:59.441460557Z" level=info msg="TearDown network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" successfully" Feb 13 19:19:59.441651 containerd[1509]: time="2025-02-13T19:19:59.441508837Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" returns successfully" Feb 13 19:19:59.441651 containerd[1509]: time="2025-02-13T19:19:59.441533254Z" level=info msg="TearDown network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" successfully" Feb 13 19:19:59.441651 containerd[1509]: 
time="2025-02-13T19:19:59.441544996Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" returns successfully" Feb 13 19:19:59.442775 containerd[1509]: time="2025-02-13T19:19:59.442746134Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\"" Feb 13 19:19:59.442859 containerd[1509]: time="2025-02-13T19:19:59.442833569Z" level=info msg="TearDown network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" successfully" Feb 13 19:19:59.442859 containerd[1509]: time="2025-02-13T19:19:59.442849158Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" returns successfully" Feb 13 19:19:59.443023 containerd[1509]: time="2025-02-13T19:19:59.442978150Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:19:59.443100 containerd[1509]: time="2025-02-13T19:19:59.443069041Z" level=info msg="TearDown network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" successfully" Feb 13 19:19:59.443100 containerd[1509]: time="2025-02-13T19:19:59.443089810Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" returns successfully" Feb 13 19:19:59.443174 containerd[1509]: time="2025-02-13T19:19:59.443137069Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\"" Feb 13 19:19:59.443259 containerd[1509]: time="2025-02-13T19:19:59.443234031Z" level=info msg="TearDown network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" successfully" Feb 13 19:19:59.443259 containerd[1509]: time="2025-02-13T19:19:59.443248569Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" returns successfully" Feb 13 19:19:59.443331 containerd[1509]: 
time="2025-02-13T19:19:59.443295858Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:19:59.443398 containerd[1509]: time="2025-02-13T19:19:59.443379354Z" level=info msg="TearDown network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" successfully" Feb 13 19:19:59.443398 containerd[1509]: time="2025-02-13T19:19:59.443393942Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" returns successfully" Feb 13 19:19:59.444127 containerd[1509]: time="2025-02-13T19:19:59.443853666Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:19:59.444127 containerd[1509]: time="2025-02-13T19:19:59.443972550Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:19:59.444127 containerd[1509]: time="2025-02-13T19:19:59.443984272Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:19:59.444201 systemd[1]: run-netns-cni\x2df5d3a580\x2df1fe\x2d327c\x2d66bd\x2df22e65de496d.mount: Deactivated successfully. Feb 13 19:19:59.444392 systemd[1]: run-netns-cni\x2d10f7692c\x2d4437\x2da7d9\x2d176a\x2dec468e1927da.mount: Deactivated successfully. Feb 13 19:19:59.444507 systemd[1]: run-netns-cni\x2d6e059229\x2dd3a8\x2dc0c7\x2d9e19\x2d3931e631ccee.mount: Deactivated successfully. 
Feb 13 19:19:59.445108 containerd[1509]: time="2025-02-13T19:19:59.445057900Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" Feb 13 19:19:59.445181 containerd[1509]: time="2025-02-13T19:19:59.445152037Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully" Feb 13 19:19:59.445181 containerd[1509]: time="2025-02-13T19:19:59.445169570Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully" Feb 13 19:19:59.445509 containerd[1509]: time="2025-02-13T19:19:59.445275569Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:19:59.445509 containerd[1509]: time="2025-02-13T19:19:59.445355740Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully" Feb 13 19:19:59.445509 containerd[1509]: time="2025-02-13T19:19:59.445364807Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully" Feb 13 19:19:59.446332 containerd[1509]: time="2025-02-13T19:19:59.446109798Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:19:59.446332 containerd[1509]: time="2025-02-13T19:19:59.446155263Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:19:59.446332 containerd[1509]: time="2025-02-13T19:19:59.446205527Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:19:59.446332 containerd[1509]: time="2025-02-13T19:19:59.446219423Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:19:59.446332 
containerd[1509]: time="2025-02-13T19:19:59.446229182Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:19:59.446332 containerd[1509]: time="2025-02-13T19:19:59.446240303Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:19:59.446332 containerd[1509]: time="2025-02-13T19:19:59.446316787Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:19:59.446914 containerd[1509]: time="2025-02-13T19:19:59.446478310Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:19:59.446914 containerd[1509]: time="2025-02-13T19:19:59.446493449Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:19:59.447421 containerd[1509]: time="2025-02-13T19:19:59.447219985Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:19:59.447421 containerd[1509]: time="2025-02-13T19:19:59.447297170Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:19:59.447421 containerd[1509]: time="2025-02-13T19:19:59.447308821Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:19:59.447421 containerd[1509]: time="2025-02-13T19:19:59.447381509Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:19:59.447572 containerd[1509]: time="2025-02-13T19:19:59.447535878Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:19:59.447572 
containerd[1509]: time="2025-02-13T19:19:59.447554473Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 19:19:59.447829 kubelet[2625]: E0213 19:19:59.447793 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:59.448146 containerd[1509]: time="2025-02-13T19:19:59.448115027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:4,}" Feb 13 19:19:59.448702 containerd[1509]: time="2025-02-13T19:19:59.448673999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:19:59.448855 containerd[1509]: time="2025-02-13T19:19:59.448830382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:19:59.448910 containerd[1509]: time="2025-02-13T19:19:59.448677124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:4,}" Feb 13 19:19:59.449468 systemd[1]: run-netns-cni\x2d2f8c0c28\x2d51f8\x2da9d6\x2de910\x2d801bad9302e8.mount: Deactivated successfully. 
Feb 13 19:19:59.450327 kubelet[2625]: I0213 19:19:59.449872 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3" Feb 13 19:19:59.450834 containerd[1509]: time="2025-02-13T19:19:59.450804514Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" Feb 13 19:19:59.451285 containerd[1509]: time="2025-02-13T19:19:59.451073279Z" level=info msg="Ensure that sandbox dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3 in task-service has been cleanup successfully" Feb 13 19:19:59.451681 containerd[1509]: time="2025-02-13T19:19:59.451568690Z" level=info msg="TearDown network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" successfully" Feb 13 19:19:59.451681 containerd[1509]: time="2025-02-13T19:19:59.451589619Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" returns successfully" Feb 13 19:19:59.453590 systemd[1]: run-netns-cni\x2d96c9b7c7\x2d601a\x2d7312\x2d384a\x2db6572cb32211.mount: Deactivated successfully. 
Feb 13 19:19:59.454273 containerd[1509]: time="2025-02-13T19:19:59.453975986Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:19:59.454273 containerd[1509]: time="2025-02-13T19:19:59.454074892Z" level=info msg="TearDown network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" successfully" Feb 13 19:19:59.454273 containerd[1509]: time="2025-02-13T19:19:59.454085893Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" returns successfully" Feb 13 19:19:59.455535 containerd[1509]: time="2025-02-13T19:19:59.455094759Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:19:59.455535 containerd[1509]: time="2025-02-13T19:19:59.455179599Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:19:59.455535 containerd[1509]: time="2025-02-13T19:19:59.455190349Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:19:59.456335 containerd[1509]: time="2025-02-13T19:19:59.456254009Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:19:59.456335 containerd[1509]: time="2025-02-13T19:19:59.456323439Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:19:59.456335 containerd[1509]: time="2025-02-13T19:19:59.456332105Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:19:59.457130 containerd[1509]: time="2025-02-13T19:19:59.457072156Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:19:59.457169 
containerd[1509]: time="2025-02-13T19:19:59.457140285Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:19:59.457169 containerd[1509]: time="2025-02-13T19:19:59.457150414Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:19:59.457667 containerd[1509]: time="2025-02-13T19:19:59.457636558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:5,}" Feb 13 19:19:59.458506 kubelet[2625]: I0213 19:19:59.458459 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f" Feb 13 19:19:59.459844 containerd[1509]: time="2025-02-13T19:19:59.459573980Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\"" Feb 13 19:19:59.460995 containerd[1509]: time="2025-02-13T19:19:59.460969875Z" level=info msg="Ensure that sandbox 22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f in task-service has been cleanup successfully" Feb 13 19:19:59.463873 containerd[1509]: time="2025-02-13T19:19:59.461635676Z" level=info msg="TearDown network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" successfully" Feb 13 19:19:59.463873 containerd[1509]: time="2025-02-13T19:19:59.461657607Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" returns successfully" Feb 13 19:19:59.464456 containerd[1509]: time="2025-02-13T19:19:59.464131418Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" Feb 13 19:19:59.464456 containerd[1509]: time="2025-02-13T19:19:59.464262595Z" level=info msg="TearDown network for sandbox 
\"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" successfully" Feb 13 19:19:59.464456 containerd[1509]: time="2025-02-13T19:19:59.464273846Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" returns successfully" Feb 13 19:19:59.465365 systemd[1]: run-netns-cni\x2d85d80400\x2da447\x2dc921\x2dfd7c\x2d3578cfc2c067.mount: Deactivated successfully. Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.466446891Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.466555795Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully" Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.466570853Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully" Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.470267283Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.470417696Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully" Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.470434407Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:19:59.470985 containerd[1509]: time="2025-02-13T19:19:59.470923237Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:19:59.471219 containerd[1509]: time="2025-02-13T19:19:59.471013726Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" 
successfully" Feb 13 19:19:59.471219 containerd[1509]: time="2025-02-13T19:19:59.471036589Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:19:59.471639 kubelet[2625]: E0213 19:19:59.471596 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:19:59.474121 containerd[1509]: time="2025-02-13T19:19:59.474089198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:5,}" Feb 13 19:19:59.809170 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:49456.service - OpenSSH per-connection server daemon (10.0.0.1:49456). Feb 13 19:20:00.001249 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 49456 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:20:00.003476 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:00.013850 systemd-logind[1499]: New session 9 of user core. Feb 13 19:20:00.020150 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 19:20:00.148019 containerd[1509]: time="2025-02-13T19:20:00.145232799Z" level=error msg="Failed to destroy network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148019 containerd[1509]: time="2025-02-13T19:20:00.145882390Z" level=error msg="encountered an error cleaning up failed sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148019 containerd[1509]: time="2025-02-13T19:20:00.146253076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148019 containerd[1509]: time="2025-02-13T19:20:00.147179798Z" level=error msg="Failed to destroy network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148019 containerd[1509]: time="2025-02-13T19:20:00.147667155Z" level=error msg="encountered an error cleaning up failed sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148019 containerd[1509]: time="2025-02-13T19:20:00.147740463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148534 kubelet[2625]: E0213 19:20:00.146705 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148534 kubelet[2625]: E0213 19:20:00.146783 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:20:00.148534 kubelet[2625]: E0213 19:20:00.146804 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs" Feb 13 19:20:00.148534 kubelet[2625]: E0213 19:20:00.147948 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.148832 kubelet[2625]: E0213 19:20:00.147981 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:20:00.148832 kubelet[2625]: E0213 19:20:00.147998 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" Feb 13 19:20:00.148832 kubelet[2625]: E0213 19:20:00.148041 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podUID="445077cf-6de7-4ccc-a14d-002ec401e21f" Feb 13 19:20:00.148926 kubelet[2625]: E0213 19:20:00.148116 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01" Feb 13 19:20:00.178176 sshd[4449]: Connection closed by 10.0.0.1 port 49456 Feb 13 19:20:00.177577 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:00.178547 containerd[1509]: time="2025-02-13T19:20:00.176876669Z" level=error msg="Failed to destroy network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.178547 containerd[1509]: time="2025-02-13T19:20:00.177881769Z" level=error msg="encountered an 
error cleaning up failed sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.180964 containerd[1509]: time="2025-02-13T19:20:00.179295076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.180964 containerd[1509]: time="2025-02-13T19:20:00.180066435Z" level=error msg="Failed to destroy network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.180964 containerd[1509]: time="2025-02-13T19:20:00.180507915Z" level=error msg="encountered an error cleaning up failed sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.180964 containerd[1509]: time="2025-02-13T19:20:00.180557859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:5,} failed, error" error="failed to 
setup network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.180898 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:49456.service: Deactivated successfully. Feb 13 19:20:00.181195 kubelet[2625]: E0213 19:20:00.180103 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.181195 kubelet[2625]: E0213 19:20:00.180184 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:20:00.181195 kubelet[2625]: E0213 19:20:00.180204 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" Feb 13 19:20:00.181276 kubelet[2625]: E0213 19:20:00.180252 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85" Feb 13 19:20:00.181276 kubelet[2625]: E0213 19:20:00.180699 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.181276 kubelet[2625]: E0213 19:20:00.180738 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:20:00.181361 kubelet[2625]: E0213 19:20:00.180760 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw" Feb 13 19:20:00.181361 kubelet[2625]: E0213 19:20:00.180802 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9" Feb 13 19:20:00.181743 containerd[1509]: time="2025-02-13T19:20:00.181584409Z" level=error msg="Failed to destroy network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.182047 containerd[1509]: time="2025-02-13T19:20:00.182015459Z" level=error msg="encountered an error cleaning up failed sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.182294 containerd[1509]: time="2025-02-13T19:20:00.182115497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:4,} failed, 
error" error="failed to setup network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.182332 kubelet[2625]: E0213 19:20:00.182205 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.182332 kubelet[2625]: E0213 19:20:00.182227 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:20:00.182332 kubelet[2625]: E0213 19:20:00.182241 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5" Feb 13 19:20:00.182423 kubelet[2625]: E0213 19:20:00.182264 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dqfv5" podUID="2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e" Feb 13 19:20:00.183635 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:20:00.185700 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:20:00.186796 systemd-logind[1499]: Removed session 9. Feb 13 19:20:00.190725 containerd[1509]: time="2025-02-13T19:20:00.190681318Z" level=error msg="Failed to destroy network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.191444 containerd[1509]: time="2025-02-13T19:20:00.191410659Z" level=error msg="encountered an error cleaning up failed sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.191487 containerd[1509]: time="2025-02-13T19:20:00.191469791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.191674 kubelet[2625]: E0213 19:20:00.191639 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:20:00.191717 kubelet[2625]: E0213 19:20:00.191691 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:20:00.191747 kubelet[2625]: E0213 19:20:00.191710 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" Feb 13 19:20:00.191772 kubelet[2625]: E0213 19:20:00.191747 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podUID="853008d7-8935-4029-ae11-bd5e471b4687" Feb 13 19:20:00.462507 kubelet[2625]: I0213 19:20:00.462476 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed" Feb 13 19:20:00.463242 containerd[1509]: time="2025-02-13T19:20:00.463215887Z" level=info msg="StopPodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\"" Feb 13 19:20:00.463542 containerd[1509]: time="2025-02-13T19:20:00.463523916Z" level=info msg="Ensure that sandbox 0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed in task-service has been cleanup successfully" Feb 13 19:20:00.463815 containerd[1509]: time="2025-02-13T19:20:00.463800697Z" level=info msg="TearDown network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" successfully" Feb 13 19:20:00.463872 containerd[1509]: time="2025-02-13T19:20:00.463861190Z" level=info msg="StopPodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" returns successfully" Feb 13 19:20:00.464719 containerd[1509]: time="2025-02-13T19:20:00.464680079Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" Feb 13 19:20:00.464847 containerd[1509]: time="2025-02-13T19:20:00.464793884Z" level=info msg="TearDown network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" successfully" Feb 13 19:20:00.464847 containerd[1509]: 
time="2025-02-13T19:20:00.464810204Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" returns successfully" Feb 13 19:20:00.466088 containerd[1509]: time="2025-02-13T19:20:00.465915612Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:20:00.466088 containerd[1509]: time="2025-02-13T19:20:00.466009568Z" level=info msg="TearDown network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" successfully" Feb 13 19:20:00.466088 containerd[1509]: time="2025-02-13T19:20:00.466018976Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" returns successfully" Feb 13 19:20:00.466142 systemd[1]: run-netns-cni\x2d73352415\x2d0074\x2d2f11\x2d078d\x2d5e7c320c8e2c.mount: Deactivated successfully. Feb 13 19:20:00.466774 containerd[1509]: time="2025-02-13T19:20:00.466648169Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:20:00.466774 containerd[1509]: time="2025-02-13T19:20:00.466737016Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:20:00.466774 containerd[1509]: time="2025-02-13T19:20:00.466746203Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:20:00.467110 containerd[1509]: time="2025-02-13T19:20:00.467063991Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:20:00.467569 containerd[1509]: time="2025-02-13T19:20:00.467526350Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:20:00.467637 containerd[1509]: time="2025-02-13T19:20:00.467621919Z" level=info msg="StopPodSandbox for 
\"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:20:00.469495 containerd[1509]: time="2025-02-13T19:20:00.468066365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:5,}" Feb 13 19:20:00.469495 containerd[1509]: time="2025-02-13T19:20:00.469066975Z" level=info msg="StopPodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\"" Feb 13 19:20:00.469495 containerd[1509]: time="2025-02-13T19:20:00.469264216Z" level=info msg="Ensure that sandbox 0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4 in task-service has been cleanup successfully" Feb 13 19:20:00.469495 containerd[1509]: time="2025-02-13T19:20:00.469469482Z" level=info msg="TearDown network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" successfully" Feb 13 19:20:00.469495 containerd[1509]: time="2025-02-13T19:20:00.469480783Z" level=info msg="StopPodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" returns successfully" Feb 13 19:20:00.469630 kubelet[2625]: I0213 19:20:00.468630 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4" Feb 13 19:20:00.471863 systemd[1]: run-netns-cni\x2db9d7fd99\x2dba94\x2d4dbc\x2dadd4\x2ddf5d7d08e789.mount: Deactivated successfully. 
Feb 13 19:20:00.472248 containerd[1509]: time="2025-02-13T19:20:00.472140373Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" Feb 13 19:20:00.472248 containerd[1509]: time="2025-02-13T19:20:00.472213169Z" level=info msg="TearDown network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" successfully" Feb 13 19:20:00.472248 containerd[1509]: time="2025-02-13T19:20:00.472222457Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" returns successfully" Feb 13 19:20:00.472682 containerd[1509]: time="2025-02-13T19:20:00.472657925Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:20:00.472828 containerd[1509]: time="2025-02-13T19:20:00.472783051Z" level=info msg="TearDown network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" successfully" Feb 13 19:20:00.472828 containerd[1509]: time="2025-02-13T19:20:00.472799572Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" returns successfully" Feb 13 19:20:00.473783 containerd[1509]: time="2025-02-13T19:20:00.473721084Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:20:00.474352 containerd[1509]: time="2025-02-13T19:20:00.474135402Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:20:00.474352 containerd[1509]: time="2025-02-13T19:20:00.474156944Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:20:00.475800 containerd[1509]: time="2025-02-13T19:20:00.474825058Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:20:00.475800 
containerd[1509]: time="2025-02-13T19:20:00.474920448Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:20:00.475800 containerd[1509]: time="2025-02-13T19:20:00.474978257Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:20:00.485203 containerd[1509]: time="2025-02-13T19:20:00.485073703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:20:00.485959 kubelet[2625]: I0213 19:20:00.485849 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1" Feb 13 19:20:00.486353 containerd[1509]: time="2025-02-13T19:20:00.486313714Z" level=info msg="StopPodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\"" Feb 13 19:20:00.486559 containerd[1509]: time="2025-02-13T19:20:00.486529519Z" level=info msg="Ensure that sandbox 6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1 in task-service has been cleanup successfully" Feb 13 19:20:00.488950 containerd[1509]: time="2025-02-13T19:20:00.486746608Z" level=info msg="TearDown network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" successfully" Feb 13 19:20:00.488950 containerd[1509]: time="2025-02-13T19:20:00.486764802Z" level=info msg="StopPodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" returns successfully" Feb 13 19:20:00.489342 containerd[1509]: time="2025-02-13T19:20:00.489312030Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" Feb 13 19:20:00.489441 containerd[1509]: time="2025-02-13T19:20:00.489410335Z" level=info msg="TearDown network for sandbox 
\"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" successfully" Feb 13 19:20:00.489441 containerd[1509]: time="2025-02-13T19:20:00.489437075Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" returns successfully" Feb 13 19:20:00.489534 systemd[1]: run-netns-cni\x2d354a76d2\x2d6f2c\x2d1a60\x2d4efe\x2deeb4c3106df5.mount: Deactivated successfully. Feb 13 19:20:00.489907 containerd[1509]: time="2025-02-13T19:20:00.489835725Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:20:00.490071 containerd[1509]: time="2025-02-13T19:20:00.489920824Z" level=info msg="TearDown network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" successfully" Feb 13 19:20:00.490071 containerd[1509]: time="2025-02-13T19:20:00.489944569Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" returns successfully" Feb 13 19:20:00.490404 containerd[1509]: time="2025-02-13T19:20:00.490342197Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:20:00.490446 containerd[1509]: time="2025-02-13T19:20:00.490437335Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:20:00.490469 containerd[1509]: time="2025-02-13T19:20:00.490447614Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:20:00.491351 containerd[1509]: time="2025-02-13T19:20:00.490844701Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:20:00.491351 containerd[1509]: time="2025-02-13T19:20:00.490946913Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" 
successfully" Feb 13 19:20:00.491351 containerd[1509]: time="2025-02-13T19:20:00.490959797Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:20:00.491485 containerd[1509]: time="2025-02-13T19:20:00.491400115Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:20:00.491485 containerd[1509]: time="2025-02-13T19:20:00.491473062Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:20:00.491485 containerd[1509]: time="2025-02-13T19:20:00.491482330Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:20:00.492213 containerd[1509]: time="2025-02-13T19:20:00.492173669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:6,}" Feb 13 19:20:00.492613 kubelet[2625]: I0213 19:20:00.492574 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d" Feb 13 19:20:00.493032 containerd[1509]: time="2025-02-13T19:20:00.493001775Z" level=info msg="StopPodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\"" Feb 13 19:20:00.493207 containerd[1509]: time="2025-02-13T19:20:00.493179249Z" level=info msg="Ensure that sandbox d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d in task-service has been cleanup successfully" Feb 13 19:20:00.493888 containerd[1509]: time="2025-02-13T19:20:00.493371831Z" level=info msg="TearDown network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" successfully" Feb 13 19:20:00.493888 containerd[1509]: time="2025-02-13T19:20:00.493398070Z" level=info 
msg="StopPodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" returns successfully"
Feb 13 19:20:00.494391 containerd[1509]: time="2025-02-13T19:20:00.494164240Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\""
Feb 13 19:20:00.494391 containerd[1509]: time="2025-02-13T19:20:00.494240514Z" level=info msg="TearDown network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" successfully"
Feb 13 19:20:00.494391 containerd[1509]: time="2025-02-13T19:20:00.494259249Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" returns successfully"
Feb 13 19:20:00.495455 containerd[1509]: time="2025-02-13T19:20:00.495263336Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\""
Feb 13 19:20:00.495455 containerd[1509]: time="2025-02-13T19:20:00.495367463Z" level=info msg="TearDown network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" successfully"
Feb 13 19:20:00.495455 containerd[1509]: time="2025-02-13T19:20:00.495402599Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" returns successfully"
Feb 13 19:20:00.495972 systemd[1]: run-netns-cni\x2d2e7a74b0\x2dfb22\x2d8cfe\x2dfafc\x2d92fb5f571f68.mount: Deactivated successfully.
Feb 13 19:20:00.496405 kubelet[2625]: I0213 19:20:00.496365 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092"
Feb 13 19:20:00.497512 containerd[1509]: time="2025-02-13T19:20:00.496570554Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\""
Feb 13 19:20:00.497512 containerd[1509]: time="2025-02-13T19:20:00.496666033Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully"
Feb 13 19:20:00.497512 containerd[1509]: time="2025-02-13T19:20:00.496684157Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully"
Feb 13 19:20:00.497512 containerd[1509]: time="2025-02-13T19:20:00.497006944Z" level=info msg="StopPodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\""
Feb 13 19:20:00.497512 containerd[1509]: time="2025-02-13T19:20:00.497284526Z" level=info msg="Ensure that sandbox b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092 in task-service has been cleanup successfully"
Feb 13 19:20:00.497738 containerd[1509]: time="2025-02-13T19:20:00.497716298Z" level=info msg="TearDown network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" successfully"
Feb 13 19:20:00.497808 containerd[1509]: time="2025-02-13T19:20:00.497793833Z" level=info msg="StopPodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" returns successfully"
Feb 13 19:20:00.498158 containerd[1509]: time="2025-02-13T19:20:00.498138000Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\""
Feb 13 19:20:00.498321 containerd[1509]: time="2025-02-13T19:20:00.498300976Z" level=info msg="TearDown network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" successfully"
Feb 13 19:20:00.498399 containerd[1509]: time="2025-02-13T19:20:00.498374785Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" returns successfully"
Feb 13 19:20:00.664002 containerd[1509]: time="2025-02-13T19:20:00.663914557Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\""
Feb 13 19:20:00.667349 containerd[1509]: time="2025-02-13T19:20:00.667241532Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully"
Feb 13 19:20:00.667349 containerd[1509]: time="2025-02-13T19:20:00.667342731Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully"
Feb 13 19:20:00.667448 containerd[1509]: time="2025-02-13T19:20:00.667346178Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\""
Feb 13 19:20:00.667642 containerd[1509]: time="2025-02-13T19:20:00.667622678Z" level=info msg="TearDown network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" successfully"
Feb 13 19:20:00.667684 containerd[1509]: time="2025-02-13T19:20:00.667642095Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" returns successfully"
Feb 13 19:20:00.668476 containerd[1509]: time="2025-02-13T19:20:00.668446697Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\""
Feb 13 19:20:00.668590 containerd[1509]: time="2025-02-13T19:20:00.668535765Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully"
Feb 13 19:20:00.668590 containerd[1509]: time="2025-02-13T19:20:00.668576451Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully"
Feb 13 19:20:00.669125 kubelet[2625]: E0213 19:20:00.669100 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:00.669901 containerd[1509]: time="2025-02-13T19:20:00.669830648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:6,}"
Feb 13 19:20:00.670816 kubelet[2625]: I0213 19:20:00.670790 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e"
Feb 13 19:20:00.672036 containerd[1509]: time="2025-02-13T19:20:00.672002601Z" level=info msg="StopPodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\""
Feb 13 19:20:00.672234 containerd[1509]: time="2025-02-13T19:20:00.672210912Z" level=info msg="Ensure that sandbox 19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e in task-service has been cleanup successfully"
Feb 13 19:20:00.672791 containerd[1509]: time="2025-02-13T19:20:00.672441976Z" level=info msg="TearDown network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" successfully"
Feb 13 19:20:00.672791 containerd[1509]: time="2025-02-13T19:20:00.672463908Z" level=info msg="StopPodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" returns successfully"
Feb 13 19:20:00.673411 containerd[1509]: time="2025-02-13T19:20:00.673266296Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\""
Feb 13 19:20:00.673411 containerd[1509]: time="2025-02-13T19:20:00.673344543Z" level=info msg="TearDown network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" successfully"
Feb 13 19:20:00.673411 containerd[1509]: time="2025-02-13T19:20:00.673353710Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" returns successfully"
Feb 13 19:20:00.673579 containerd[1509]: time="2025-02-13T19:20:00.673552063Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\""
Feb 13 19:20:00.673644 containerd[1509]: time="2025-02-13T19:20:00.673630581Z" level=info msg="TearDown network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" successfully"
Feb 13 19:20:00.673667 containerd[1509]: time="2025-02-13T19:20:00.673642934Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" returns successfully"
Feb 13 19:20:00.674037 containerd[1509]: time="2025-02-13T19:20:00.674013170Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\""
Feb 13 19:20:00.674106 containerd[1509]: time="2025-02-13T19:20:00.674075638Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\""
Feb 13 19:20:00.674234 containerd[1509]: time="2025-02-13T19:20:00.674175435Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully"
Feb 13 19:20:00.674234 containerd[1509]: time="2025-02-13T19:20:00.674229817Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully"
Feb 13 19:20:00.674396 containerd[1509]: time="2025-02-13T19:20:00.674085767Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully"
Feb 13 19:20:00.674396 containerd[1509]: time="2025-02-13T19:20:00.674290632Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully"
Feb 13 19:20:00.674993 containerd[1509]: time="2025-02-13T19:20:00.674815138Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\""
Feb 13 19:20:00.674993 containerd[1509]: time="2025-02-13T19:20:00.674902992Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully"
Feb 13 19:20:00.674993 containerd[1509]: time="2025-02-13T19:20:00.674949770Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\""
Feb 13 19:20:00.675096 containerd[1509]: time="2025-02-13T19:20:00.675029130Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully"
Feb 13 19:20:00.675096 containerd[1509]: time="2025-02-13T19:20:00.675037826Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully"
Feb 13 19:20:00.675096 containerd[1509]: time="2025-02-13T19:20:00.674954649Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully"
Feb 13 19:20:00.675240 kubelet[2625]: E0213 19:20:00.675212 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:00.676384 containerd[1509]: time="2025-02-13T19:20:00.676350974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:5,}"
Feb 13 19:20:00.676587 containerd[1509]: time="2025-02-13T19:20:00.676567321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:5,}"
Feb 13 19:20:01.378996 systemd[1]: run-netns-cni\x2d849c0511\x2d187b\x2da190\x2db55b\x2dcc4b9b6628eb.mount: Deactivated successfully.
Feb 13 19:20:01.379375 systemd[1]: run-netns-cni\x2dcee6f488\x2d4a30\x2d74c1\x2d6dac\x2dfdfa59588f98.mount: Deactivated successfully.
Feb 13 19:20:01.857888 kubelet[2625]: I0213 19:20:01.857858 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:20:01.858673 kubelet[2625]: E0213 19:20:01.858656 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:02.247968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751809265.mount: Deactivated successfully.
Feb 13 19:20:02.695157 kubelet[2625]: E0213 19:20:02.695127 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:02.993584 containerd[1509]: time="2025-02-13T19:20:02.993455261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:03.000965 containerd[1509]: time="2025-02-13T19:20:03.000765478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 19:20:03.002116 containerd[1509]: time="2025-02-13T19:20:03.002067464Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:03.008266 containerd[1509]: time="2025-02-13T19:20:03.008223270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:03.009948 containerd[1509]: time="2025-02-13T19:20:03.009387247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.667227915s"
Feb 13 19:20:03.010073 containerd[1509]: time="2025-02-13T19:20:03.010033492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 19:20:03.022489 containerd[1509]: time="2025-02-13T19:20:03.022447938Z" level=info msg="CreateContainer within sandbox \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 19:20:03.053086 containerd[1509]: time="2025-02-13T19:20:03.053038845Z" level=info msg="CreateContainer within sandbox \"6c4f917d51eab3541c3b760cca2c971ba507c0e3dc4cb0fd939ba088493a26eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1b975d59bb27e8a031fc7f6960effa2a5a9f037cb672d6452c212ce586d0720d\""
Feb 13 19:20:03.053693 containerd[1509]: time="2025-02-13T19:20:03.053660795Z" level=info msg="StartContainer for \"1b975d59bb27e8a031fc7f6960effa2a5a9f037cb672d6452c212ce586d0720d\""
Feb 13 19:20:03.104820 containerd[1509]: time="2025-02-13T19:20:03.104758913Z" level=error msg="Failed to destroy network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.105953 containerd[1509]: time="2025-02-13T19:20:03.105906590Z" level=error msg="encountered an error cleaning up failed sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.106007 containerd[1509]: time="2025-02-13T19:20:03.105981682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.106168 kubelet[2625]: E0213 19:20:03.106135 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.106494 kubelet[2625]: E0213 19:20:03.106184 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs"
Feb 13 19:20:03.106494 kubelet[2625]: E0213 19:20:03.106203 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gj6hs"
Feb 13 19:20:03.106494 kubelet[2625]: E0213 19:20:03.106242 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gj6hs_calico-system(4c0c44a2-2d4f-44a3-b176-d65ebad0fd01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gj6hs" podUID="4c0c44a2-2d4f-44a3-b176-d65ebad0fd01"
Feb 13 19:20:03.107486 containerd[1509]: time="2025-02-13T19:20:03.107451523Z" level=error msg="Failed to destroy network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.108147 containerd[1509]: time="2025-02-13T19:20:03.108109529Z" level=error msg="encountered an error cleaning up failed sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.108260 containerd[1509]: time="2025-02-13T19:20:03.108242129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.108593 kubelet[2625]: E0213 19:20:03.108465 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.108593 kubelet[2625]: E0213 19:20:03.108496 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5"
Feb 13 19:20:03.108593 kubelet[2625]: E0213 19:20:03.108519 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dqfv5"
Feb 13 19:20:03.108712 kubelet[2625]: E0213 19:20:03.108553 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dqfv5_kube-system(2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dqfv5" podUID="2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e"
Feb 13 19:20:03.109071 containerd[1509]: time="2025-02-13T19:20:03.109045468Z" level=error msg="Failed to destroy network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.110185 containerd[1509]: time="2025-02-13T19:20:03.110101993Z" level=error msg="encountered an error cleaning up failed sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.110185 containerd[1509]: time="2025-02-13T19:20:03.110140857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.110282 kubelet[2625]: E0213 19:20:03.110263 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.110393 kubelet[2625]: E0213 19:20:03.110289 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp"
Feb 13 19:20:03.110393 kubelet[2625]: E0213 19:20:03.110330 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp"
Feb 13 19:20:03.110393 kubelet[2625]: E0213 19:20:03.110354 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5467b9d745-75rrp_calico-system(12aa1040-68e2-4470-b0af-95b247e00e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podUID="12aa1040-68e2-4470-b0af-95b247e00e85"
Feb 13 19:20:03.111830 containerd[1509]: time="2025-02-13T19:20:03.111781588Z" level=error msg="Failed to destroy network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.112313 containerd[1509]: time="2025-02-13T19:20:03.112207038Z" level=error msg="encountered an error cleaning up failed sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.112313 containerd[1509]: time="2025-02-13T19:20:03.112266019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.112448 kubelet[2625]: E0213 19:20:03.112424 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.112482 kubelet[2625]: E0213 19:20:03.112454 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw"
Feb 13 19:20:03.112482 kubelet[2625]: E0213 19:20:03.112471 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szpfw"
Feb 13 19:20:03.112537 kubelet[2625]: E0213 19:20:03.112503 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szpfw_kube-system(d78af842-8204-4eb8-8b0d-729f562f41c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szpfw" podUID="d78af842-8204-4eb8-8b0d-729f562f41c9"
Feb 13 19:20:03.120315 containerd[1509]: time="2025-02-13T19:20:03.120245512Z" level=error msg="Failed to destroy network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.120688 containerd[1509]: time="2025-02-13T19:20:03.120662656Z" level=error msg="encountered an error cleaning up failed sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.120736 containerd[1509]: time="2025-02-13T19:20:03.120718280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.120965 kubelet[2625]: E0213 19:20:03.120917 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.121046 kubelet[2625]: E0213 19:20:03.120983 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf"
Feb 13 19:20:03.121046 kubelet[2625]: E0213 19:20:03.121005 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf"
Feb 13 19:20:03.121111 kubelet[2625]: E0213 19:20:03.121047 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-bldmf_calico-apiserver(445077cf-6de7-4ccc-a14d-002ec401e21f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podUID="445077cf-6de7-4ccc-a14d-002ec401e21f"
Feb 13 19:20:03.124541 containerd[1509]: time="2025-02-13T19:20:03.124454701Z" level=error msg="Failed to destroy network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.125005 containerd[1509]: time="2025-02-13T19:20:03.124972483Z" level=error msg="encountered an error cleaning up failed sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.125070 containerd[1509]: time="2025-02-13T19:20:03.125047425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.125295 kubelet[2625]: E0213 19:20:03.125255 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:20:03.125413 kubelet[2625]: E0213 19:20:03.125307 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4"
Feb 13 19:20:03.125413 kubelet[2625]: E0213 19:20:03.125336 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4"
Feb 13 19:20:03.125413 kubelet[2625]: E0213 19:20:03.125376 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd559d499-qrdn4_calico-apiserver(853008d7-8935-4029-ae11-bd5e471b4687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podUID="853008d7-8935-4029-ae11-bd5e471b4687"
Feb 13 19:20:03.136117 systemd[1]: Started cri-containerd-1b975d59bb27e8a031fc7f6960effa2a5a9f037cb672d6452c212ce586d0720d.scope - libcontainer container 1b975d59bb27e8a031fc7f6960effa2a5a9f037cb672d6452c212ce586d0720d.
Feb 13 19:20:03.170482 containerd[1509]: time="2025-02-13T19:20:03.170434433Z" level=info msg="StartContainer for \"1b975d59bb27e8a031fc7f6960effa2a5a9f037cb672d6452c212ce586d0720d\" returns successfully"
Feb 13 19:20:03.237697 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 19:20:03.237840 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 13 19:20:03.698703 kubelet[2625]: I0213 19:20:03.698667 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e"
Feb 13 19:20:03.702326 containerd[1509]: time="2025-02-13T19:20:03.702274389Z" level=info msg="StopPodSandbox for \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\""
Feb 13 19:20:03.702722 containerd[1509]: time="2025-02-13T19:20:03.702630058Z" level=info msg="Ensure that sandbox 623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e in task-service has been cleanup successfully"
Feb 13 19:20:03.703434 containerd[1509]: time="2025-02-13T19:20:03.703375769Z" level=info msg="TearDown network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\" successfully"
Feb 13 19:20:03.703434 containerd[1509]: time="2025-02-13T19:20:03.703425182Z" level=info msg="StopPodSandbox for \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\" returns successfully"
Feb 13 19:20:03.704140 containerd[1509]: time="2025-02-13T19:20:03.704093979Z" level=info msg="StopPodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\""
Feb 13 19:20:03.704246 containerd[1509]: time="2025-02-13T19:20:03.704196000Z" level=info msg="TearDown network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" successfully"
Feb 13 19:20:03.704320 containerd[1509]: time="2025-02-13T19:20:03.704244842Z" level=info msg="StopPodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" returns successfully"
Feb 13 19:20:03.704670 containerd[1509]: time="2025-02-13T19:20:03.704641447Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\""
Feb 13 19:20:03.704808 containerd[1509]: time="2025-02-13T19:20:03.704774858Z" level=info msg="TearDown network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" successfully"
Feb 13 19:20:03.704808 containerd[1509]: time="2025-02-13T19:20:03.704802269Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" returns successfully"
Feb 13 19:20:03.705288 kubelet[2625]: E0213 19:20:03.705262 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:03.705970 containerd[1509]: time="2025-02-13T19:20:03.705761181Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\""
Feb 13 19:20:03.705970 containerd[1509]: time="2025-02-13T19:20:03.705869244Z" level=info msg="TearDown network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" successfully"
Feb 13 19:20:03.705970 containerd[1509]: time="2025-02-13T19:20:03.705893429Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" returns successfully"
Feb 13 19:20:03.706295 containerd[1509]: time="2025-02-13T19:20:03.706264447Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\""
Feb 13 19:20:03.706404 containerd[1509]: time="2025-02-13T19:20:03.706380855Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully"
Feb 13 19:20:03.706440 containerd[1509]: time="2025-02-13T19:20:03.706404129Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully"
Feb 13 19:20:03.707783 containerd[1509]: time="2025-02-13T19:20:03.707643408Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\""
Feb 13
19:20:03.707783 containerd[1509]: time="2025-02-13T19:20:03.707723067Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:20:03.707783 containerd[1509]: time="2025-02-13T19:20:03.707756310Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:20:03.708666 containerd[1509]: time="2025-02-13T19:20:03.708648396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:20:03.709763 kubelet[2625]: I0213 19:20:03.709741 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420" Feb 13 19:20:03.728362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420-shm.mount: Deactivated successfully. Feb 13 19:20:03.728507 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d-shm.mount: Deactivated successfully. Feb 13 19:20:03.728604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb-shm.mount: Deactivated successfully. 
Feb 13 19:20:03.738273 containerd[1509]: time="2025-02-13T19:20:03.738220871Z" level=info msg="StopPodSandbox for \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\"" Feb 13 19:20:03.738924 containerd[1509]: time="2025-02-13T19:20:03.738486399Z" level=info msg="Ensure that sandbox b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420 in task-service has been cleanup successfully" Feb 13 19:20:03.741056 containerd[1509]: time="2025-02-13T19:20:03.741021132Z" level=info msg="TearDown network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\" successfully" Feb 13 19:20:03.741131 containerd[1509]: time="2025-02-13T19:20:03.741055416Z" level=info msg="StopPodSandbox for \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\" returns successfully" Feb 13 19:20:03.743118 containerd[1509]: time="2025-02-13T19:20:03.741900915Z" level=info msg="StopPodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\"" Feb 13 19:20:03.743358 containerd[1509]: time="2025-02-13T19:20:03.743202962Z" level=info msg="TearDown network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" successfully" Feb 13 19:20:03.743358 containerd[1509]: time="2025-02-13T19:20:03.743220374Z" level=info msg="StopPodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" returns successfully" Feb 13 19:20:03.744302 containerd[1509]: time="2025-02-13T19:20:03.743500801Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" Feb 13 19:20:03.744302 containerd[1509]: time="2025-02-13T19:20:03.743574980Z" level=info msg="TearDown network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" successfully" Feb 13 19:20:03.744302 containerd[1509]: time="2025-02-13T19:20:03.743584087Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" 
returns successfully" Feb 13 19:20:03.744412 systemd[1]: run-netns-cni\x2d5cd61f0c\x2d76e7\x2dc1b1\x2d7082\x2d89272c1492ad.mount: Deactivated successfully. Feb 13 19:20:03.750955 containerd[1509]: time="2025-02-13T19:20:03.748553044Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:20:03.750955 containerd[1509]: time="2025-02-13T19:20:03.748656548Z" level=info msg="TearDown network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" successfully" Feb 13 19:20:03.750955 containerd[1509]: time="2025-02-13T19:20:03.748666036Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" returns successfully" Feb 13 19:20:03.751508 kubelet[2625]: I0213 19:20:03.751483 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669" Feb 13 19:20:03.752039 containerd[1509]: time="2025-02-13T19:20:03.751987777Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:20:03.752139 containerd[1509]: time="2025-02-13T19:20:03.752116550Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:20:03.752139 containerd[1509]: time="2025-02-13T19:20:03.752135996Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:20:03.754654 containerd[1509]: time="2025-02-13T19:20:03.754628089Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:20:03.755984 containerd[1509]: time="2025-02-13T19:20:03.754719731Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:20:03.755984 containerd[1509]: 
time="2025-02-13T19:20:03.754734118Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:20:03.755984 containerd[1509]: time="2025-02-13T19:20:03.754793158Z" level=info msg="StopPodSandbox for \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\"" Feb 13 19:20:03.755984 containerd[1509]: time="2025-02-13T19:20:03.754968789Z" level=info msg="Ensure that sandbox 3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669 in task-service has been cleanup successfully" Feb 13 19:20:03.759258 containerd[1509]: time="2025-02-13T19:20:03.759178128Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:20:03.759344 containerd[1509]: time="2025-02-13T19:20:03.759279217Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:20:03.759344 containerd[1509]: time="2025-02-13T19:20:03.759292442Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:20:03.760129 systemd[1]: run-netns-cni\x2d784784ba\x2df009\x2d532a\x2d1df4\x2de16116c47c8c.mount: Deactivated successfully. 
Feb 13 19:20:03.764154 containerd[1509]: time="2025-02-13T19:20:03.762853624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:7,}" Feb 13 19:20:03.768965 containerd[1509]: time="2025-02-13T19:20:03.763236783Z" level=info msg="TearDown network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\" successfully" Feb 13 19:20:03.768965 containerd[1509]: time="2025-02-13T19:20:03.766064767Z" level=info msg="StopPodSandbox for \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\" returns successfully" Feb 13 19:20:03.770168 containerd[1509]: time="2025-02-13T19:20:03.770133752Z" level=info msg="StopPodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\"" Feb 13 19:20:03.770683 containerd[1509]: time="2025-02-13T19:20:03.770377440Z" level=info msg="TearDown network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" successfully" Feb 13 19:20:03.770683 containerd[1509]: time="2025-02-13T19:20:03.770396937Z" level=info msg="StopPodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" returns successfully" Feb 13 19:20:03.771535 containerd[1509]: time="2025-02-13T19:20:03.771512523Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\"" Feb 13 19:20:03.774105 containerd[1509]: time="2025-02-13T19:20:03.774073615Z" level=info msg="TearDown network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" successfully" Feb 13 19:20:03.774434 containerd[1509]: time="2025-02-13T19:20:03.774214400Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" returns successfully" Feb 13 19:20:03.777016 containerd[1509]: time="2025-02-13T19:20:03.775484927Z" level=info msg="StopPodSandbox for 
\"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" Feb 13 19:20:03.777016 containerd[1509]: time="2025-02-13T19:20:03.775566700Z" level=info msg="TearDown network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" successfully" Feb 13 19:20:03.777016 containerd[1509]: time="2025-02-13T19:20:03.775575907Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" returns successfully" Feb 13 19:20:03.780792 kubelet[2625]: I0213 19:20:03.779564 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf" Feb 13 19:20:03.780917 containerd[1509]: time="2025-02-13T19:20:03.780140515Z" level=info msg="StopPodSandbox for \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\"" Feb 13 19:20:03.780917 containerd[1509]: time="2025-02-13T19:20:03.780329038Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:20:03.780917 containerd[1509]: time="2025-02-13T19:20:03.780373743Z" level=info msg="Ensure that sandbox 1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf in task-service has been cleanup successfully" Feb 13 19:20:03.780917 containerd[1509]: time="2025-02-13T19:20:03.780472628Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully" Feb 13 19:20:03.780917 containerd[1509]: time="2025-02-13T19:20:03.780488278Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully" Feb 13 19:20:03.781946 containerd[1509]: time="2025-02-13T19:20:03.781070822Z" level=info msg="TearDown network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\" successfully" Feb 13 19:20:03.781946 containerd[1509]: time="2025-02-13T19:20:03.781093385Z" 
level=info msg="StopPodSandbox for \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\" returns successfully" Feb 13 19:20:03.784536 containerd[1509]: time="2025-02-13T19:20:03.784508090Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:20:03.786433 systemd[1]: run-netns-cni\x2dd5704e7e\x2d839f\x2d2b86\x2d6c77\x2d8c00cea52594.mount: Deactivated successfully. Feb 13 19:20:03.786767 containerd[1509]: time="2025-02-13T19:20:03.786745344Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully" Feb 13 19:20:03.786848 containerd[1509]: time="2025-02-13T19:20:03.786834151Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:20:03.787860 containerd[1509]: time="2025-02-13T19:20:03.787819242Z" level=info msg="StopPodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\"" Feb 13 19:20:03.788028 containerd[1509]: time="2025-02-13T19:20:03.787996024Z" level=info msg="TearDown network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" successfully" Feb 13 19:20:03.788028 containerd[1509]: time="2025-02-13T19:20:03.788025690Z" level=info msg="StopPodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" returns successfully" Feb 13 19:20:03.788652 containerd[1509]: time="2025-02-13T19:20:03.788277423Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:20:03.788652 containerd[1509]: time="2025-02-13T19:20:03.788393300Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:20:03.788652 containerd[1509]: time="2025-02-13T19:20:03.788403219Z" level=info msg="StopPodSandbox for 
\"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:20:03.788844 kubelet[2625]: E0213 19:20:03.788811 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:20:03.791794 containerd[1509]: time="2025-02-13T19:20:03.791757932Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\"" Feb 13 19:20:03.792552 containerd[1509]: time="2025-02-13T19:20:03.791783320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:7,}" Feb 13 19:20:03.792552 containerd[1509]: time="2025-02-13T19:20:03.792475401Z" level=info msg="TearDown network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" successfully" Feb 13 19:20:03.792552 containerd[1509]: time="2025-02-13T19:20:03.792492793Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" returns successfully" Feb 13 19:20:03.793866 containerd[1509]: time="2025-02-13T19:20:03.793724137Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\"" Feb 13 19:20:03.793866 containerd[1509]: time="2025-02-13T19:20:03.793813445Z" level=info msg="TearDown network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" successfully" Feb 13 19:20:03.793866 containerd[1509]: time="2025-02-13T19:20:03.793823845Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" returns successfully" Feb 13 19:20:03.794678 kubelet[2625]: I0213 19:20:03.794162 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb" Feb 13 19:20:03.794866 
containerd[1509]: time="2025-02-13T19:20:03.794846726Z" level=info msg="StopPodSandbox for \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\"" Feb 13 19:20:03.795193 containerd[1509]: time="2025-02-13T19:20:03.795171848Z" level=info msg="Ensure that sandbox aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb in task-service has been cleanup successfully" Feb 13 19:20:03.795647 containerd[1509]: time="2025-02-13T19:20:03.795623055Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:20:03.796142 containerd[1509]: time="2025-02-13T19:20:03.796072790Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully" Feb 13 19:20:03.796223 containerd[1509]: time="2025-02-13T19:20:03.796206842Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully" Feb 13 19:20:03.796929 containerd[1509]: time="2025-02-13T19:20:03.795724735Z" level=info msg="TearDown network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\" successfully" Feb 13 19:20:03.796929 containerd[1509]: time="2025-02-13T19:20:03.796340353Z" level=info msg="StopPodSandbox for \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\" returns successfully" Feb 13 19:20:03.796929 containerd[1509]: time="2025-02-13T19:20:03.796816106Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:20:03.796929 containerd[1509]: time="2025-02-13T19:20:03.796908290Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:20:03.796929 containerd[1509]: time="2025-02-13T19:20:03.796918138Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 
19:20:03.797116 containerd[1509]: time="2025-02-13T19:20:03.796974524Z" level=info msg="StopPodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\"" Feb 13 19:20:03.797116 containerd[1509]: time="2025-02-13T19:20:03.797049094Z" level=info msg="TearDown network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" successfully" Feb 13 19:20:03.797116 containerd[1509]: time="2025-02-13T19:20:03.797058181Z" level=info msg="StopPodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" returns successfully" Feb 13 19:20:03.797571 kubelet[2625]: E0213 19:20:03.797550 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:20:03.798328 containerd[1509]: time="2025-02-13T19:20:03.797947191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:6,}" Feb 13 19:20:03.798580 containerd[1509]: time="2025-02-13T19:20:03.798557288Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" Feb 13 19:20:03.799293 containerd[1509]: time="2025-02-13T19:20:03.799267863Z" level=info msg="TearDown network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" successfully" Feb 13 19:20:03.799462 containerd[1509]: time="2025-02-13T19:20:03.799420049Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" returns successfully" Feb 13 19:20:03.800397 containerd[1509]: time="2025-02-13T19:20:03.800163125Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:20:03.800397 containerd[1509]: time="2025-02-13T19:20:03.800255680Z" level=info msg="TearDown network for sandbox 
\"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" successfully" Feb 13 19:20:03.800397 containerd[1509]: time="2025-02-13T19:20:03.800267772Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" returns successfully" Feb 13 19:20:03.801353 containerd[1509]: time="2025-02-13T19:20:03.801299690Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:20:03.801535 containerd[1509]: time="2025-02-13T19:20:03.801516378Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:20:03.801608 containerd[1509]: time="2025-02-13T19:20:03.801592691Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:20:03.802389 containerd[1509]: time="2025-02-13T19:20:03.802021437Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:20:03.802389 containerd[1509]: time="2025-02-13T19:20:03.802104994Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:20:03.802389 containerd[1509]: time="2025-02-13T19:20:03.802115583Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:20:03.802910 containerd[1509]: time="2025-02-13T19:20:03.802883617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:6,}" Feb 13 19:20:03.803504 kubelet[2625]: I0213 19:20:03.803430 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d" Feb 13 19:20:03.805078 containerd[1509]: 
time="2025-02-13T19:20:03.805036843Z" level=info msg="StopPodSandbox for \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\"" Feb 13 19:20:03.805899 containerd[1509]: time="2025-02-13T19:20:03.805269871Z" level=info msg="Ensure that sandbox 96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d in task-service has been cleanup successfully" Feb 13 19:20:03.805899 containerd[1509]: time="2025-02-13T19:20:03.805557411Z" level=info msg="TearDown network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\" successfully" Feb 13 19:20:03.805899 containerd[1509]: time="2025-02-13T19:20:03.805570185Z" level=info msg="StopPodSandbox for \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\" returns successfully" Feb 13 19:20:03.807989 containerd[1509]: time="2025-02-13T19:20:03.807552830Z" level=info msg="StopPodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\"" Feb 13 19:20:03.807989 containerd[1509]: time="2025-02-13T19:20:03.807672334Z" level=info msg="TearDown network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" successfully" Feb 13 19:20:03.807989 containerd[1509]: time="2025-02-13T19:20:03.807684167Z" level=info msg="StopPodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" returns successfully" Feb 13 19:20:03.808101 containerd[1509]: time="2025-02-13T19:20:03.808049673Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" Feb 13 19:20:03.808236 containerd[1509]: time="2025-02-13T19:20:03.808142819Z" level=info msg="TearDown network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" successfully" Feb 13 19:20:03.808268 containerd[1509]: time="2025-02-13T19:20:03.808223330Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" returns successfully" Feb 13 19:20:03.808659 
containerd[1509]: time="2025-02-13T19:20:03.808548020Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:20:03.809021 containerd[1509]: time="2025-02-13T19:20:03.809002764Z" level=info msg="TearDown network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" successfully" Feb 13 19:20:03.809354 containerd[1509]: time="2025-02-13T19:20:03.809107861Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" returns successfully" Feb 13 19:20:03.809707 containerd[1509]: time="2025-02-13T19:20:03.809691879Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:20:03.809822 containerd[1509]: time="2025-02-13T19:20:03.809808839Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:20:03.809960 containerd[1509]: time="2025-02-13T19:20:03.809945786Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:20:03.810354 containerd[1509]: time="2025-02-13T19:20:03.810339065Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:20:03.810507 containerd[1509]: time="2025-02-13T19:20:03.810494086Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:20:03.810906 containerd[1509]: time="2025-02-13T19:20:03.810889910Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:20:03.811471 containerd[1509]: time="2025-02-13T19:20:03.811440114Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:20:04.034887 systemd-networkd[1438]: cali89ed57f99de: Link UP Feb 13 19:20:04.035180 systemd-networkd[1438]: cali89ed57f99de: Gained carrier Feb 13 19:20:04.065332 kubelet[2625]: I0213 19:20:04.064614 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-llbc5" podStartSLOduration=2.269915372 podStartE2EDuration="26.064580902s" podCreationTimestamp="2025-02-13 19:19:38 +0000 UTC" firstStartedPulling="2025-02-13 19:19:39.2161014 +0000 UTC m=+14.033897409" lastFinishedPulling="2025-02-13 19:20:03.01076693 +0000 UTC m=+37.828562939" observedRunningTime="2025-02-13 19:20:03.746037948 +0000 UTC m=+38.563833977" watchObservedRunningTime="2025-02-13 19:20:04.064580902 +0000 UTC m=+38.882376911" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.892 [INFO][4976] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.904 [INFO][4976] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--szpfw-eth0 coredns-668d6bf9bc- kube-system d78af842-8204-4eb8-8b0d-729f562f41c9 740 0 2025-02-13 19:19:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-szpfw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89ed57f99de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.906 
[INFO][4976] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.975 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" HandleID="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Workload="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.992 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" HandleID="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Workload="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002651d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-szpfw", "timestamp":"2025-02-13 19:20:03.975065483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.993 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][5043] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:03.997 [INFO][5043] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.002 [INFO][5043] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.007 [INFO][5043] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.009 [INFO][5043] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.011 [INFO][5043] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.011 [INFO][5043] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.013 [INFO][5043] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15 Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.016 [INFO][5043] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.022 [INFO][5043] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.022 [INFO][5043] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" host="localhost" Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.022 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:20:04.067068 containerd[1509]: 2025-02-13 19:20:04.022 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" HandleID="k8s-pod-network.f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Workload="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 19:20:04.068198 containerd[1509]: 2025-02-13 19:20:04.025 [INFO][4976] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--szpfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d78af842-8204-4eb8-8b0d-729f562f41c9", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-szpfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89ed57f99de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.068198 containerd[1509]: 2025-02-13 19:20:04.025 [INFO][4976] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 19:20:04.068198 containerd[1509]: 2025-02-13 19:20:04.025 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89ed57f99de ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 19:20:04.068198 containerd[1509]: 2025-02-13 19:20:04.035 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 
19:20:04.068198 containerd[1509]: 2025-02-13 19:20:04.035 [INFO][4976] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--szpfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d78af842-8204-4eb8-8b0d-729f562f41c9", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15", Pod:"coredns-668d6bf9bc-szpfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89ed57f99de", MAC:"36:67:be:f6:3a:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.068198 containerd[1509]: 2025-02-13 19:20:04.064 [INFO][4976] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15" Namespace="kube-system" Pod="coredns-668d6bf9bc-szpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szpfw-eth0" Feb 13 19:20:04.146003 containerd[1509]: time="2025-02-13T19:20:04.145858468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:20:04.146003 containerd[1509]: time="2025-02-13T19:20:04.145923551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:20:04.146003 containerd[1509]: time="2025-02-13T19:20:04.145954639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.146325 containerd[1509]: time="2025-02-13T19:20:04.146142381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.180249 systemd[1]: Started cri-containerd-f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15.scope - libcontainer container f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15. 
Feb 13 19:20:04.195291 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:20:04.230065 containerd[1509]: time="2025-02-13T19:20:04.229574716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szpfw,Uid:d78af842-8204-4eb8-8b0d-729f562f41c9,Namespace:kube-system,Attempt:7,} returns sandbox id \"f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15\"" Feb 13 19:20:04.230148 systemd-networkd[1438]: cali20b734b5597: Link UP Feb 13 19:20:04.231435 systemd-networkd[1438]: cali20b734b5597: Gained carrier Feb 13 19:20:04.232746 kubelet[2625]: E0213 19:20:04.231745 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:20:04.237214 containerd[1509]: time="2025-02-13T19:20:04.236952227Z" level=info msg="CreateContainer within sandbox \"f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:03.899 [INFO][4985] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:03.914 [INFO][4985] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0 coredns-668d6bf9bc- kube-system 2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e 749 0 2025-02-13 19:19:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dqfv5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali20b734b5597 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:03.914 [INFO][4985] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:03.975 [INFO][5047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" HandleID="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Workload="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][5047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" HandleID="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Workload="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005020a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dqfv5", "timestamp":"2025-02-13 19:20:03.974736084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.022 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.022 [INFO][5047] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.097 [INFO][5047] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.154 [INFO][5047] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.161 [INFO][5047] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.163 [INFO][5047] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.166 [INFO][5047] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.166 [INFO][5047] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.168 [INFO][5047] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29 Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.200 [INFO][5047] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.221 [INFO][5047] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.222 [INFO][5047] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" host="localhost" Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.222 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:20:04.249337 containerd[1509]: 2025-02-13 19:20:04.222 [INFO][5047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" HandleID="k8s-pod-network.c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Workload="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 19:20:04.250045 containerd[1509]: 2025-02-13 19:20:04.225 [INFO][4985] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dqfv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20b734b5597", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.250045 containerd[1509]: 2025-02-13 19:20:04.226 [INFO][4985] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 19:20:04.250045 containerd[1509]: 2025-02-13 19:20:04.226 [INFO][4985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20b734b5597 ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 19:20:04.250045 containerd[1509]: 2025-02-13 19:20:04.232 [INFO][4985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 
19:20:04.250045 containerd[1509]: 2025-02-13 19:20:04.233 [INFO][4985] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29", Pod:"coredns-668d6bf9bc-dqfv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali20b734b5597", MAC:"76:d1:75:5c:08:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.250045 containerd[1509]: 2025-02-13 19:20:04.246 [INFO][4985] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29" Namespace="kube-system" Pod="coredns-668d6bf9bc-dqfv5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dqfv5-eth0" Feb 13 19:20:04.275875 containerd[1509]: time="2025-02-13T19:20:04.274905459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:20:04.275875 containerd[1509]: time="2025-02-13T19:20:04.275711122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:20:04.275875 containerd[1509]: time="2025-02-13T19:20:04.275724167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.276093 containerd[1509]: time="2025-02-13T19:20:04.275864310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.297119 systemd[1]: Started cri-containerd-c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29.scope - libcontainer container c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29. 
Feb 13 19:20:04.313175 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:20:04.322172 systemd-networkd[1438]: cali08d72628914: Link UP Feb 13 19:20:04.322516 systemd-networkd[1438]: cali08d72628914: Gained carrier Feb 13 19:20:04.325009 containerd[1509]: time="2025-02-13T19:20:04.324772755Z" level=info msg="CreateContainer within sandbox \"f74b8589b42babbaba3074931e567455fac664e1dde45d42debcd630d4110a15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4d5f12837f27b6b3128c3bd6be76de53e2653e4b355d30372282b303cd54ace\"" Feb 13 19:20:04.327041 containerd[1509]: time="2025-02-13T19:20:04.327021740Z" level=info msg="StartContainer for \"f4d5f12837f27b6b3128c3bd6be76de53e2653e4b355d30372282b303cd54ace\"" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:03.842 [INFO][4952] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:03.856 [INFO][4952] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0 calico-kube-controllers-5467b9d745- calico-system 12aa1040-68e2-4470-b0af-95b247e00e85 742 0 2025-02-13 19:19:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5467b9d745 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5467b9d745-75rrp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali08d72628914 [] []}} ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-" 
Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:03.856 [INFO][4952] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:03.981 [INFO][4983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" HandleID="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Workload="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][4983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" HandleID="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Workload="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ef10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5467b9d745-75rrp", "timestamp":"2025-02-13 19:20:03.981151819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][4983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.222 [INFO][4983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.222 [INFO][4983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.224 [INFO][4983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.256 [INFO][4983] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.261 [INFO][4983] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.263 [INFO][4983] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.265 [INFO][4983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.265 [INFO][4983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.267 [INFO][4983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.304 [INFO][4983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.315 [INFO][4983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.315 [INFO][4983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" host="localhost" Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.315 [INFO][4983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:20:04.341584 containerd[1509]: 2025-02-13 19:20:04.315 [INFO][4983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" HandleID="k8s-pod-network.946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Workload="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.342401 containerd[1509]: 2025-02-13 19:20:04.319 [INFO][4952] cni-plugin/k8s.go 386: Populated endpoint ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0", GenerateName:"calico-kube-controllers-5467b9d745-", Namespace:"calico-system", SelfLink:"", UID:"12aa1040-68e2-4470-b0af-95b247e00e85", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5467b9d745", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5467b9d745-75rrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08d72628914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.342401 containerd[1509]: 2025-02-13 19:20:04.319 [INFO][4952] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.342401 containerd[1509]: 2025-02-13 19:20:04.319 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08d72628914 ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.342401 containerd[1509]: 2025-02-13 19:20:04.323 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.342401 containerd[1509]: 2025-02-13 19:20:04.323 [INFO][4952] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0", GenerateName:"calico-kube-controllers-5467b9d745-", Namespace:"calico-system", SelfLink:"", UID:"12aa1040-68e2-4470-b0af-95b247e00e85", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5467b9d745", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e", Pod:"calico-kube-controllers-5467b9d745-75rrp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali08d72628914", MAC:"0e:e2:0d:4d:3a:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.342401 containerd[1509]: 2025-02-13 19:20:04.335 [INFO][4952] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e" Namespace="calico-system" Pod="calico-kube-controllers-5467b9d745-75rrp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5467b9d745--75rrp-eth0" Feb 13 19:20:04.355948 containerd[1509]: time="2025-02-13T19:20:04.355575915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dqfv5,Uid:2dbb4619-b530-4c82-b5dd-b3f7d0fb4c0e,Namespace:kube-system,Attempt:6,} returns sandbox id \"c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29\"" Feb 13 19:20:04.357034 kubelet[2625]: E0213 19:20:04.356822 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:20:04.363102 containerd[1509]: time="2025-02-13T19:20:04.363012947Z" level=info msg="CreateContainer within sandbox \"c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:20:04.372178 systemd[1]: Started cri-containerd-f4d5f12837f27b6b3128c3bd6be76de53e2653e4b355d30372282b303cd54ace.scope - libcontainer container f4d5f12837f27b6b3128c3bd6be76de53e2653e4b355d30372282b303cd54ace. Feb 13 19:20:04.373972 containerd[1509]: time="2025-02-13T19:20:04.373793540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:20:04.373972 containerd[1509]: time="2025-02-13T19:20:04.373912714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:20:04.373972 containerd[1509]: time="2025-02-13T19:20:04.373947318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.374119 containerd[1509]: time="2025-02-13T19:20:04.374073726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.383696 containerd[1509]: time="2025-02-13T19:20:04.383531645Z" level=info msg="CreateContainer within sandbox \"c6a9eb688c257c110ce780dacf63f9602808ab677551dbf0aad838895d8e6f29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6a8cd786affddb94724869879dd8c19134a5f53df74ad88c6b8b52f71ec3cbb\"" Feb 13 19:20:04.385414 containerd[1509]: time="2025-02-13T19:20:04.384866944Z" level=info msg="StartContainer for \"c6a8cd786affddb94724869879dd8c19134a5f53df74ad88c6b8b52f71ec3cbb\"" Feb 13 19:20:04.398417 systemd-networkd[1438]: calicce75eae0c0: Link UP Feb 13 19:20:04.398721 systemd-networkd[1438]: calicce75eae0c0: Gained carrier Feb 13 19:20:04.405289 systemd[1]: Started cri-containerd-946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e.scope - libcontainer container 946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e. 
Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:03.820 [INFO][4924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:03.846 [INFO][4924] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0 calico-apiserver-7cd559d499- calico-apiserver 853008d7-8935-4029-ae11-bd5e471b4687 748 0 2025-02-13 19:19:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd559d499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd559d499-qrdn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicce75eae0c0 [] []}} ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:03.846 [INFO][4924] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:03.975 [INFO][4973] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" HandleID="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Workload="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][4973] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" HandleID="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Workload="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b41e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd559d499-qrdn4", "timestamp":"2025-02-13 19:20:03.975203824 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:03.995 [INFO][4973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.316 [INFO][4973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.316 [INFO][4973] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.327 [INFO][4973] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.360 [INFO][4973] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.371 [INFO][4973] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.373 [INFO][4973] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.375 [INFO][4973] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.375 [INFO][4973] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.377 [INFO][4973] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.381 [INFO][4973] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.389 [INFO][4973] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.389 [INFO][4973] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" host="localhost" Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.389 [INFO][4973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:20:04.416428 containerd[1509]: 2025-02-13 19:20:04.389 [INFO][4973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" HandleID="k8s-pod-network.82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Workload="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.417630 containerd[1509]: 2025-02-13 19:20:04.394 [INFO][4924] cni-plugin/k8s.go 386: Populated endpoint ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0", GenerateName:"calico-apiserver-7cd559d499-", Namespace:"calico-apiserver", SelfLink:"", UID:"853008d7-8935-4029-ae11-bd5e471b4687", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd559d499", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd559d499-qrdn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicce75eae0c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.417630 containerd[1509]: 2025-02-13 19:20:04.395 [INFO][4924] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.417630 containerd[1509]: 2025-02-13 19:20:04.395 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicce75eae0c0 ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.417630 containerd[1509]: 2025-02-13 19:20:04.399 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.417630 containerd[1509]: 2025-02-13 19:20:04.399 [INFO][4924] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0", GenerateName:"calico-apiserver-7cd559d499-", Namespace:"calico-apiserver", SelfLink:"", UID:"853008d7-8935-4029-ae11-bd5e471b4687", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd559d499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e", Pod:"calico-apiserver-7cd559d499-qrdn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicce75eae0c0", MAC:"ba:a2:73:48:8f:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.417630 containerd[1509]: 2025-02-13 19:20:04.410 [INFO][4924] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-qrdn4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--qrdn4-eth0" Feb 13 19:20:04.440980 containerd[1509]: time="2025-02-13T19:20:04.440924289Z" level=info msg="StartContainer for \"f4d5f12837f27b6b3128c3bd6be76de53e2653e4b355d30372282b303cd54ace\" returns successfully" Feb 13 19:20:04.445086 systemd[1]: Started cri-containerd-c6a8cd786affddb94724869879dd8c19134a5f53df74ad88c6b8b52f71ec3cbb.scope - libcontainer container c6a8cd786affddb94724869879dd8c19134a5f53df74ad88c6b8b52f71ec3cbb. Feb 13 19:20:04.450165 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:20:04.464026 containerd[1509]: time="2025-02-13T19:20:04.463765851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:20:04.464026 containerd[1509]: time="2025-02-13T19:20:04.463835863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:20:04.464026 containerd[1509]: time="2025-02-13T19:20:04.463850210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.464233 containerd[1509]: time="2025-02-13T19:20:04.463962460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.496078 systemd[1]: Started cri-containerd-82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e.scope - libcontainer container 82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e. 
Feb 13 19:20:04.502327 containerd[1509]: time="2025-02-13T19:20:04.502261493Z" level=info msg="StartContainer for \"c6a8cd786affddb94724869879dd8c19134a5f53df74ad88c6b8b52f71ec3cbb\" returns successfully" Feb 13 19:20:04.502551 containerd[1509]: time="2025-02-13T19:20:04.502371590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5467b9d745-75rrp,Uid:12aa1040-68e2-4470-b0af-95b247e00e85,Namespace:calico-system,Attempt:7,} returns sandbox id \"946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e\"" Feb 13 19:20:04.505237 containerd[1509]: time="2025-02-13T19:20:04.504963198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:20:04.512548 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:20:04.525426 systemd-networkd[1438]: calicd3c0bb9717: Link UP Feb 13 19:20:04.525683 systemd-networkd[1438]: calicd3c0bb9717: Gained carrier Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:03.915 [INFO][4995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:03.940 [INFO][4995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gj6hs-eth0 csi-node-driver- calico-system 4c0c44a2-2d4f-44a3-b176-d65ebad0fd01 641 0 2025-02-13 19:19:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gj6hs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd3c0bb9717 [] []}} ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" 
Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:03.940 [INFO][4995] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:03.982 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" HandleID="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Workload="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" HandleID="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Workload="localhost-k8s-csi--node--driver--gj6hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gj6hs", "timestamp":"2025-02-13 19:20:03.980890749 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:03.996 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.389 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.389 [INFO][5063] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.427 [INFO][5063] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.457 [INFO][5063] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.474 [INFO][5063] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.477 [INFO][5063] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.481 [INFO][5063] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.482 [INFO][5063] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.485 [INFO][5063] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.502 [INFO][5063] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.519 [INFO][5063] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.519 [INFO][5063] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" host="localhost" Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.519 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:20:04.545947 containerd[1509]: 2025-02-13 19:20:04.519 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" HandleID="k8s-pod-network.0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Workload="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.546819 containerd[1509]: 2025-02-13 19:20:04.522 [INFO][4995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gj6hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01", ResourceVersion:"641", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gj6hs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd3c0bb9717", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.546819 containerd[1509]: 2025-02-13 19:20:04.522 [INFO][4995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.546819 containerd[1509]: 2025-02-13 19:20:04.522 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd3c0bb9717 ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.546819 containerd[1509]: 2025-02-13 19:20:04.526 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.546819 containerd[1509]: 2025-02-13 19:20:04.526 [INFO][4995] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" 
Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gj6hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c0c44a2-2d4f-44a3-b176-d65ebad0fd01", ResourceVersion:"641", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d", Pod:"csi-node-driver-gj6hs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd3c0bb9717", MAC:"ae:ce:38:b0:5f:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:20:04.546819 containerd[1509]: 2025-02-13 19:20:04.542 [INFO][4995] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d" Namespace="calico-system" Pod="csi-node-driver-gj6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--gj6hs-eth0" Feb 13 19:20:04.547682 containerd[1509]: 
time="2025-02-13T19:20:04.547458976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-qrdn4,Uid:853008d7-8935-4029-ae11-bd5e471b4687,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e\"" Feb 13 19:20:04.569108 containerd[1509]: time="2025-02-13T19:20:04.568757499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:20:04.569108 containerd[1509]: time="2025-02-13T19:20:04.568840424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:20:04.569108 containerd[1509]: time="2025-02-13T19:20:04.568855072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.569108 containerd[1509]: time="2025-02-13T19:20:04.568997510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:20:04.588149 systemd[1]: Started cri-containerd-0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d.scope - libcontainer container 0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d. 
Feb 13 19:20:04.603669 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:20:04.610101 systemd-networkd[1438]: cali87947ab227d: Link UP
Feb 13 19:20:04.612346 systemd-networkd[1438]: cali87947ab227d: Gained carrier
Feb 13 19:20:04.633398 containerd[1509]: time="2025-02-13T19:20:04.633341953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gj6hs,Uid:4c0c44a2-2d4f-44a3-b176-d65ebad0fd01,Namespace:calico-system,Attempt:6,} returns sandbox id \"0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d\""
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:03.914 [INFO][5015] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:03.941 [INFO][5015] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0 calico-apiserver-7cd559d499- calico-apiserver 445077cf-6de7-4ccc-a14d-002ec401e21f 746 0 2025-02-13 19:19:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd559d499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd559d499-bldmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87947ab227d [] []}} ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:03.941 [INFO][5015] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:03.994 [INFO][5058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" HandleID="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Workload="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.001 [INFO][5058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" HandleID="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Workload="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000521130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd559d499-bldmf", "timestamp":"2025-02-13 19:20:03.994826994 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.001 [INFO][5058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.519 [INFO][5058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.519 [INFO][5058] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.527 [INFO][5058] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.557 [INFO][5058] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.576 [INFO][5058] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.579 [INFO][5058] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.582 [INFO][5058] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.582 [INFO][5058] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.584 [INFO][5058] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.589 [INFO][5058] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.599 [INFO][5058] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.599 [INFO][5058] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" host="localhost"
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.599 [INFO][5058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:20:04.642065 containerd[1509]: 2025-02-13 19:20:04.599 [INFO][5058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" HandleID="k8s-pod-network.98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Workload="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.643420 containerd[1509]: 2025-02-13 19:20:04.604 [INFO][5015] cni-plugin/k8s.go 386: Populated endpoint ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0", GenerateName:"calico-apiserver-7cd559d499-", Namespace:"calico-apiserver", SelfLink:"", UID:"445077cf-6de7-4ccc-a14d-002ec401e21f", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd559d499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd559d499-bldmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87947ab227d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:20:04.643420 containerd[1509]: 2025-02-13 19:20:04.605 [INFO][5015] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.643420 containerd[1509]: 2025-02-13 19:20:04.605 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87947ab227d ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.643420 containerd[1509]: 2025-02-13 19:20:04.615 [INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.643420 containerd[1509]: 2025-02-13 19:20:04.616 [INFO][5015] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0", GenerateName:"calico-apiserver-7cd559d499-", Namespace:"calico-apiserver", SelfLink:"", UID:"445077cf-6de7-4ccc-a14d-002ec401e21f", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd559d499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752", Pod:"calico-apiserver-7cd559d499-bldmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87947ab227d", MAC:"72:8b:34:3d:d2:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:20:04.643420 containerd[1509]: 2025-02-13 19:20:04.629 [INFO][5015] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752" Namespace="calico-apiserver" Pod="calico-apiserver-7cd559d499-bldmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd559d499--bldmf-eth0"
Feb 13 19:20:04.705561 containerd[1509]: time="2025-02-13T19:20:04.705335559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:20:04.705561 containerd[1509]: time="2025-02-13T19:20:04.705389820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:20:04.705561 containerd[1509]: time="2025-02-13T19:20:04.705400110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:20:04.705561 containerd[1509]: time="2025-02-13T19:20:04.705481813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:20:04.753320 systemd[1]: run-netns-cni\x2dfd6d7b16\x2d6d10\x2d1f17\x2df92c\x2dda4b8691c80b.mount: Deactivated successfully.
Feb 13 19:20:04.754303 systemd[1]: run-netns-cni\x2d845124df\x2de386\x2d22fa\x2d7f5d\x2deab8326513a0.mount: Deactivated successfully.
Feb 13 19:20:04.778180 systemd[1]: Started cri-containerd-98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752.scope - libcontainer container 98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752.
Feb 13 19:20:04.819107 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:20:04.820979 kubelet[2625]: E0213 19:20:04.820904 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:04.847197 kubelet[2625]: E0213 19:20:04.847154 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:04.857491 kubelet[2625]: I0213 19:20:04.857425 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dqfv5" podStartSLOduration=32.857406684 podStartE2EDuration="32.857406684s" podCreationTimestamp="2025-02-13 19:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:20:04.855098267 +0000 UTC m=+39.672894296" watchObservedRunningTime="2025-02-13 19:20:04.857406684 +0000 UTC m=+39.675202703"
Feb 13 19:20:04.868620 kubelet[2625]: E0213 19:20:04.866338 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:04.879828 containerd[1509]: time="2025-02-13T19:20:04.879746845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd559d499-bldmf,Uid:445077cf-6de7-4ccc-a14d-002ec401e21f,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752\""
Feb 13 19:20:04.917991 kernel: bpftool[5625]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 19:20:05.170516 systemd-networkd[1438]: vxlan.calico: Link UP
Feb 13 19:20:05.170527 systemd-networkd[1438]: vxlan.calico: Gained carrier
Feb 13 19:20:05.188681 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470).
Feb 13 19:20:05.258032 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:05.259986 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:05.264806 systemd-logind[1499]: New session 10 of user core.
Feb 13 19:20:05.270110 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:20:05.437506 sshd[5674]: Connection closed by 10.0.0.1 port 49470
Feb 13 19:20:05.437752 sshd-session[5660]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:05.442487 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:49470.service: Deactivated successfully.
Feb 13 19:20:05.444923 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:20:05.445641 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:20:05.446677 systemd-logind[1499]: Removed session 10.
Feb 13 19:20:05.869612 kubelet[2625]: E0213 19:20:05.869584 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:05.870005 kubelet[2625]: E0213 19:20:05.869770 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:05.892152 systemd-networkd[1438]: cali87947ab227d: Gained IPv6LL
Feb 13 19:20:06.021339 systemd-networkd[1438]: cali89ed57f99de: Gained IPv6LL
Feb 13 19:20:06.021764 systemd-networkd[1438]: calicce75eae0c0: Gained IPv6LL
Feb 13 19:20:06.212071 systemd-networkd[1438]: cali20b734b5597: Gained IPv6LL
Feb 13 19:20:06.212390 systemd-networkd[1438]: cali08d72628914: Gained IPv6LL
Feb 13 19:20:06.532088 systemd-networkd[1438]: calicd3c0bb9717: Gained IPv6LL
Feb 13 19:20:06.871900 kubelet[2625]: E0213 19:20:06.871755 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:06.871900 kubelet[2625]: E0213 19:20:06.871791 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:06.980207 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL
Feb 13 19:20:07.608663 containerd[1509]: time="2025-02-13T19:20:07.608595222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:07.609338 containerd[1509]: time="2025-02-13T19:20:07.609281551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Feb 13 19:20:07.610786 containerd[1509]: time="2025-02-13T19:20:07.610750560Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:07.613804 containerd[1509]: time="2025-02-13T19:20:07.613734795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:07.614253 containerd[1509]: time="2025-02-13T19:20:07.614198777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.109185104s"
Feb 13 19:20:07.614253 containerd[1509]: time="2025-02-13T19:20:07.614248460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Feb 13 19:20:07.615122 containerd[1509]: time="2025-02-13T19:20:07.615099989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 19:20:07.622200 containerd[1509]: time="2025-02-13T19:20:07.622165379Z" level=info msg="CreateContainer within sandbox \"946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 19:20:07.636785 containerd[1509]: time="2025-02-13T19:20:07.636741364Z" level=info msg="CreateContainer within sandbox \"946ff8777ef19c9e175c2528fe7bbf5683ea3d593b65a04d7ca4f5b98971662e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"901c578b6aab1941aba33bc4f9aa47fe9224e81a040d7b7484c754650848790d\""
Feb 13 19:20:07.637401 containerd[1509]: time="2025-02-13T19:20:07.637342674Z" level=info msg="StartContainer for \"901c578b6aab1941aba33bc4f9aa47fe9224e81a040d7b7484c754650848790d\""
Feb 13 19:20:07.665099 systemd[1]: Started cri-containerd-901c578b6aab1941aba33bc4f9aa47fe9224e81a040d7b7484c754650848790d.scope - libcontainer container 901c578b6aab1941aba33bc4f9aa47fe9224e81a040d7b7484c754650848790d.
Feb 13 19:20:07.848544 containerd[1509]: time="2025-02-13T19:20:07.848488730Z" level=info msg="StartContainer for \"901c578b6aab1941aba33bc4f9aa47fe9224e81a040d7b7484c754650848790d\" returns successfully"
Feb 13 19:20:08.083620 kubelet[2625]: I0213 19:20:08.082892 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-szpfw" podStartSLOduration=35.082874498 podStartE2EDuration="35.082874498s" podCreationTimestamp="2025-02-13 19:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:20:05.031060982 +0000 UTC m=+39.848856991" watchObservedRunningTime="2025-02-13 19:20:08.082874498 +0000 UTC m=+42.900670517"
Feb 13 19:20:08.083620 kubelet[2625]: I0213 19:20:08.083097 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5467b9d745-75rrp" podStartSLOduration=26.972617441 podStartE2EDuration="30.083091846s" podCreationTimestamp="2025-02-13 19:19:38 +0000 UTC" firstStartedPulling="2025-02-13 19:20:04.504485151 +0000 UTC m=+39.322281160" lastFinishedPulling="2025-02-13 19:20:07.614959566 +0000 UTC m=+42.432755565" observedRunningTime="2025-02-13 19:20:08.082631983 +0000 UTC m=+42.900428002" watchObservedRunningTime="2025-02-13 19:20:08.083091846 +0000 UTC m=+42.900887856"
Feb 13 19:20:08.879033 kubelet[2625]: I0213 19:20:08.878994 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:20:10.454380 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:58408.service - OpenSSH per-connection server daemon (10.0.0.1:58408).
Feb 13 19:20:10.512395 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 58408 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:10.514777 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:10.520662 systemd-logind[1499]: New session 11 of user core.
Feb 13 19:20:10.529279 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:20:10.666448 sshd[5785]: Connection closed by 10.0.0.1 port 58408
Feb 13 19:20:10.666833 sshd-session[5775]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:10.672756 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:58408.service: Deactivated successfully.
Feb 13 19:20:10.675350 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:20:10.676266 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:20:10.677378 systemd-logind[1499]: Removed session 11.
Feb 13 19:20:12.251833 containerd[1509]: time="2025-02-13T19:20:12.251721075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:12.253393 containerd[1509]: time="2025-02-13T19:20:12.253317442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Feb 13 19:20:12.261518 containerd[1509]: time="2025-02-13T19:20:12.261452043Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:12.266653 containerd[1509]: time="2025-02-13T19:20:12.266589990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:12.267294 containerd[1509]: time="2025-02-13T19:20:12.267249918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.652119481s"
Feb 13 19:20:12.267294 containerd[1509]: time="2025-02-13T19:20:12.267284042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 19:20:12.268415 containerd[1509]: time="2025-02-13T19:20:12.268255406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 19:20:12.269429 containerd[1509]: time="2025-02-13T19:20:12.269390086Z" level=info msg="CreateContainer within sandbox \"82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 19:20:12.388843 containerd[1509]: time="2025-02-13T19:20:12.388788449Z" level=info msg="CreateContainer within sandbox \"82d002b626391b17fb4dbdfe233fde10d9da40a5de2bb0607df3160b4f73947e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"717e875c0fce73eccaa781e08059c978796c9e13f4e9065e8f9ad57c009d866f\""
Feb 13 19:20:12.389529 containerd[1509]: time="2025-02-13T19:20:12.389285251Z" level=info msg="StartContainer for \"717e875c0fce73eccaa781e08059c978796c9e13f4e9065e8f9ad57c009d866f\""
Feb 13 19:20:12.426206 systemd[1]: Started cri-containerd-717e875c0fce73eccaa781e08059c978796c9e13f4e9065e8f9ad57c009d866f.scope - libcontainer container 717e875c0fce73eccaa781e08059c978796c9e13f4e9065e8f9ad57c009d866f.
Feb 13 19:20:12.465760 containerd[1509]: time="2025-02-13T19:20:12.465722972Z" level=info msg="StartContainer for \"717e875c0fce73eccaa781e08059c978796c9e13f4e9065e8f9ad57c009d866f\" returns successfully"
Feb 13 19:20:14.211597 kubelet[2625]: I0213 19:20:14.211530 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd559d499-qrdn4" podStartSLOduration=28.493373609 podStartE2EDuration="36.211510364s" podCreationTimestamp="2025-02-13 19:19:38 +0000 UTC" firstStartedPulling="2025-02-13 19:20:04.549992775 +0000 UTC m=+39.367788784" lastFinishedPulling="2025-02-13 19:20:12.26812951 +0000 UTC m=+47.085925539" observedRunningTime="2025-02-13 19:20:12.954673374 +0000 UTC m=+47.772469393" watchObservedRunningTime="2025-02-13 19:20:14.211510364 +0000 UTC m=+49.029306373"
Feb 13 19:20:15.038234 containerd[1509]: time="2025-02-13T19:20:15.038150165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:15.045323 containerd[1509]: time="2025-02-13T19:20:15.045276110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 19:20:15.055569 containerd[1509]: time="2025-02-13T19:20:15.055522863Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:15.071502 containerd[1509]: time="2025-02-13T19:20:15.071452022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:15.072235 containerd[1509]: time="2025-02-13T19:20:15.072191370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.80389613s"
Feb 13 19:20:15.072280 containerd[1509]: time="2025-02-13T19:20:15.072237877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 19:20:15.073286 containerd[1509]: time="2025-02-13T19:20:15.073247342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 19:20:15.074221 containerd[1509]: time="2025-02-13T19:20:15.074195411Z" level=info msg="CreateContainer within sandbox \"0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 19:20:15.208185 containerd[1509]: time="2025-02-13T19:20:15.208129326Z" level=info msg="CreateContainer within sandbox \"0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"36cdde330020802cfe3deb0044d1c77c375728b3a373da5e9657567be7f368c1\""
Feb 13 19:20:15.208743 containerd[1509]: time="2025-02-13T19:20:15.208693915Z" level=info msg="StartContainer for \"36cdde330020802cfe3deb0044d1c77c375728b3a373da5e9657567be7f368c1\""
Feb 13 19:20:15.239082 systemd[1]: Started cri-containerd-36cdde330020802cfe3deb0044d1c77c375728b3a373da5e9657567be7f368c1.scope - libcontainer container 36cdde330020802cfe3deb0044d1c77c375728b3a373da5e9657567be7f368c1.
Feb 13 19:20:15.423665 containerd[1509]: time="2025-02-13T19:20:15.423523471Z" level=info msg="StartContainer for \"36cdde330020802cfe3deb0044d1c77c375728b3a373da5e9657567be7f368c1\" returns successfully"
Feb 13 19:20:15.679957 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:58420.service - OpenSSH per-connection server daemon (10.0.0.1:58420).
Feb 13 19:20:15.711186 containerd[1509]: time="2025-02-13T19:20:15.711142923Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:20:15.711996 containerd[1509]: time="2025-02-13T19:20:15.711958023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 19:20:15.714149 containerd[1509]: time="2025-02-13T19:20:15.714109592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 640.823868ms"
Feb 13 19:20:15.714149 containerd[1509]: time="2025-02-13T19:20:15.714140079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 19:20:15.715297 containerd[1509]: time="2025-02-13T19:20:15.715141789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 19:20:15.716336 containerd[1509]: time="2025-02-13T19:20:15.716274254Z" level=info msg="CreateContainer within sandbox \"98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 19:20:15.733379 containerd[1509]: time="2025-02-13T19:20:15.733333245Z" level=info msg="CreateContainer within sandbox \"98fa0f8e1459bac234da1b0907f0ad913a5e29975a4e44a035e2279c2ee7f752\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1a22246b79917a99d012e159ac24976004583ddfc7457133bde81320ff24068\""
Feb 13 19:20:15.734078 containerd[1509]: time="2025-02-13T19:20:15.734036845Z" level=info msg="StartContainer for \"f1a22246b79917a99d012e159ac24976004583ddfc7457133bde81320ff24068\""
Feb 13 19:20:15.743950 sshd[5898]: Accepted publickey for core from 10.0.0.1 port 58420 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:15.745703 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:15.750794 systemd-logind[1499]: New session 12 of user core.
Feb 13 19:20:15.761102 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:20:15.764696 systemd[1]: Started cri-containerd-f1a22246b79917a99d012e159ac24976004583ddfc7457133bde81320ff24068.scope - libcontainer container f1a22246b79917a99d012e159ac24976004583ddfc7457133bde81320ff24068.
Feb 13 19:20:15.806691 containerd[1509]: time="2025-02-13T19:20:15.806647575Z" level=info msg="StartContainer for \"f1a22246b79917a99d012e159ac24976004583ddfc7457133bde81320ff24068\" returns successfully"
Feb 13 19:20:15.894833 sshd[5918]: Connection closed by 10.0.0.1 port 58420
Feb 13 19:20:15.896580 sshd-session[5898]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:15.905882 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:58420.service: Deactivated successfully.
Feb 13 19:20:15.909356 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:20:15.911537 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:20:15.922698 kubelet[2625]: I0213 19:20:15.922631 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd559d499-bldmf" podStartSLOduration=27.093209113 podStartE2EDuration="37.922614555s" podCreationTimestamp="2025-02-13 19:19:38 +0000 UTC" firstStartedPulling="2025-02-13 19:20:04.885565436 +0000 UTC m=+39.703361445" lastFinishedPulling="2025-02-13 19:20:15.714970868 +0000 UTC m=+50.532766887" observedRunningTime="2025-02-13 19:20:15.922124966 +0000 UTC m=+50.739920985" watchObservedRunningTime="2025-02-13 19:20:15.922614555 +0000 UTC m=+50.740410564"
Feb 13 19:20:15.923323 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:58432.service - OpenSSH per-connection server daemon (10.0.0.1:58432).
Feb 13 19:20:15.928052 systemd-logind[1499]: Removed session 12.
Feb 13 19:20:15.967319 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 58432 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:15.969047 sshd-session[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:15.973596 systemd-logind[1499]: New session 13 of user core.
Feb 13 19:20:15.981182 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:20:16.139211 sshd[5960]: Connection closed by 10.0.0.1 port 58432 Feb 13 19:20:16.141181 sshd-session[5954]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:16.155601 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:58432.service: Deactivated successfully. Feb 13 19:20:16.157641 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:20:16.158512 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:20:16.166241 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:58436.service - OpenSSH per-connection server daemon (10.0.0.1:58436). Feb 13 19:20:16.167446 systemd-logind[1499]: Removed session 13. Feb 13 19:20:16.297330 sshd[5970]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:20:16.298739 sshd-session[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:16.303171 systemd-logind[1499]: New session 14 of user core. Feb 13 19:20:16.315052 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:20:16.501186 sshd[5973]: Connection closed by 10.0.0.1 port 58436 Feb 13 19:20:16.501564 sshd-session[5970]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:16.505923 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:58436.service: Deactivated successfully. Feb 13 19:20:16.508234 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:20:16.508919 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:20:16.509834 systemd-logind[1499]: Removed session 14. 
Feb 13 19:20:18.589109 containerd[1509]: time="2025-02-13T19:20:18.588981186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:20:18.590681 containerd[1509]: time="2025-02-13T19:20:18.590644257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:20:18.592209 containerd[1509]: time="2025-02-13T19:20:18.592155834Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:20:18.594695 containerd[1509]: time="2025-02-13T19:20:18.594634846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:20:18.595251 containerd[1509]: time="2025-02-13T19:20:18.595224402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.880055853s" Feb 13 19:20:18.595297 containerd[1509]: time="2025-02-13T19:20:18.595252405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:20:18.597303 containerd[1509]: time="2025-02-13T19:20:18.597277395Z" level=info msg="CreateContainer within sandbox \"0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:20:18.616576 containerd[1509]: time="2025-02-13T19:20:18.616533433Z" level=info msg="CreateContainer within sandbox \"0940bcf9a5535c365651b41b16c92933c4f30115ea63f7b420405db5872b594d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6ba6ac0ba8a58ca6c2ceb140e81d7294b936c3cc3184fa8d5b4d608fb604459\"" Feb 13 19:20:18.617069 containerd[1509]: time="2025-02-13T19:20:18.617039393Z" level=info msg="StartContainer for \"d6ba6ac0ba8a58ca6c2ceb140e81d7294b936c3cc3184fa8d5b4d608fb604459\"" Feb 13 19:20:18.658100 systemd[1]: Started cri-containerd-d6ba6ac0ba8a58ca6c2ceb140e81d7294b936c3cc3184fa8d5b4d608fb604459.scope - libcontainer container d6ba6ac0ba8a58ca6c2ceb140e81d7294b936c3cc3184fa8d5b4d608fb604459. Feb 13 19:20:18.720112 containerd[1509]: time="2025-02-13T19:20:18.720056777Z" level=info msg="StartContainer for \"d6ba6ac0ba8a58ca6c2ceb140e81d7294b936c3cc3184fa8d5b4d608fb604459\" returns successfully" Feb 13 19:20:19.325291 kubelet[2625]: I0213 19:20:19.325242 2625 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:20:19.325291 kubelet[2625]: I0213 19:20:19.325286 2625 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:20:20.602815 kubelet[2625]: I0213 19:20:20.602748 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:20:20.771403 kubelet[2625]: I0213 19:20:20.771338 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gj6hs" podStartSLOduration=28.811486267 podStartE2EDuration="42.771320176s" podCreationTimestamp="2025-02-13 19:19:38 +0000 UTC" firstStartedPulling="2025-02-13 19:20:04.636209691 +0000 UTC m=+39.454005700" 
lastFinishedPulling="2025-02-13 19:20:18.5960436 +0000 UTC m=+53.413839609" observedRunningTime="2025-02-13 19:20:18.97036359 +0000 UTC m=+53.788159599" watchObservedRunningTime="2025-02-13 19:20:20.771320176 +0000 UTC m=+55.589116185" Feb 13 19:20:21.516830 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:37274.service - OpenSSH per-connection server daemon (10.0.0.1:37274). Feb 13 19:20:21.574546 sshd[6090]: Accepted publickey for core from 10.0.0.1 port 37274 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8 Feb 13 19:20:21.576165 sshd-session[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:21.580635 systemd-logind[1499]: New session 15 of user core. Feb 13 19:20:21.587049 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:20:21.745161 sshd[6092]: Connection closed by 10.0.0.1 port 37274 Feb 13 19:20:21.745491 sshd-session[6090]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:21.750089 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:37274.service: Deactivated successfully. Feb 13 19:20:21.752856 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:20:21.753680 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:20:21.754639 systemd-logind[1499]: Removed session 15. 
Feb 13 19:20:25.254647 containerd[1509]: time="2025-02-13T19:20:25.254602806Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:20:25.255178 containerd[1509]: time="2025-02-13T19:20:25.254726799Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:20:25.255178 containerd[1509]: time="2025-02-13T19:20:25.254736627Z" level=info msg="StopPodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:20:25.260711 containerd[1509]: time="2025-02-13T19:20:25.260684236Z" level=info msg="RemovePodSandbox for \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:20:25.272808 containerd[1509]: time="2025-02-13T19:20:25.272745826Z" level=info msg="Forcibly stopping sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\"" Feb 13 19:20:25.272973 containerd[1509]: time="2025-02-13T19:20:25.272896159Z" level=info msg="TearDown network for sandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" successfully" Feb 13 19:20:25.392609 containerd[1509]: time="2025-02-13T19:20:25.392550135Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.392764 containerd[1509]: time="2025-02-13T19:20:25.392631738Z" level=info msg="RemovePodSandbox \"9dea37e66f5c61ae3c507931ff52147e2132904adcfbba5e2faed571771da6f0\" returns successfully" Feb 13 19:20:25.393271 containerd[1509]: time="2025-02-13T19:20:25.393242113Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:20:25.393397 containerd[1509]: time="2025-02-13T19:20:25.393372578Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:20:25.393397 containerd[1509]: time="2025-02-13T19:20:25.393390552Z" level=info msg="StopPodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:20:25.393801 containerd[1509]: time="2025-02-13T19:20:25.393766598Z" level=info msg="RemovePodSandbox for \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:20:25.393801 containerd[1509]: time="2025-02-13T19:20:25.393798718Z" level=info msg="Forcibly stopping sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\"" Feb 13 19:20:25.393922 containerd[1509]: time="2025-02-13T19:20:25.393875672Z" level=info msg="TearDown network for sandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" successfully" Feb 13 19:20:25.458271 containerd[1509]: time="2025-02-13T19:20:25.458195691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.458434 containerd[1509]: time="2025-02-13T19:20:25.458281582Z" level=info msg="RemovePodSandbox \"fafca30b7de124b4acbe9abc71b86e73668cf9ac603156e1fc886edc85c9ac1e\" returns successfully" Feb 13 19:20:25.458842 containerd[1509]: time="2025-02-13T19:20:25.458778825Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:20:25.459019 containerd[1509]: time="2025-02-13T19:20:25.458976105Z" level=info msg="TearDown network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" successfully" Feb 13 19:20:25.459019 containerd[1509]: time="2025-02-13T19:20:25.458991704Z" level=info msg="StopPodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" returns successfully" Feb 13 19:20:25.459352 containerd[1509]: time="2025-02-13T19:20:25.459320041Z" level=info msg="RemovePodSandbox for \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:20:25.459352 containerd[1509]: time="2025-02-13T19:20:25.459342643Z" level=info msg="Forcibly stopping sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\"" Feb 13 19:20:25.459481 containerd[1509]: time="2025-02-13T19:20:25.459408968Z" level=info msg="TearDown network for sandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" successfully" Feb 13 19:20:25.488090 containerd[1509]: time="2025-02-13T19:20:25.488020729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.488246 containerd[1509]: time="2025-02-13T19:20:25.488105147Z" level=info msg="RemovePodSandbox \"8c1589494c4827d8eb04ebe04fdeb51b485c642ef1e1d46a9091075978afdbeb\" returns successfully" Feb 13 19:20:25.488586 containerd[1509]: time="2025-02-13T19:20:25.488554461Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" Feb 13 19:20:25.488695 containerd[1509]: time="2025-02-13T19:20:25.488675878Z" level=info msg="TearDown network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" successfully" Feb 13 19:20:25.488749 containerd[1509]: time="2025-02-13T19:20:25.488692850Z" level=info msg="StopPodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" returns successfully" Feb 13 19:20:25.489134 containerd[1509]: time="2025-02-13T19:20:25.489098351Z" level=info msg="RemovePodSandbox for \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" Feb 13 19:20:25.489184 containerd[1509]: time="2025-02-13T19:20:25.489138847Z" level=info msg="Forcibly stopping sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\"" Feb 13 19:20:25.489286 containerd[1509]: time="2025-02-13T19:20:25.489233424Z" level=info msg="TearDown network for sandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" successfully" Feb 13 19:20:25.516561 containerd[1509]: time="2025-02-13T19:20:25.516435121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.516561 containerd[1509]: time="2025-02-13T19:20:25.516502627Z" level=info msg="RemovePodSandbox \"a4fcccae5f9c1db7a65498f17f859b9668d6b4695808459c7dc225631e602604\" returns successfully" Feb 13 19:20:25.516995 containerd[1509]: time="2025-02-13T19:20:25.516943134Z" level=info msg="StopPodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\"" Feb 13 19:20:25.517087 containerd[1509]: time="2025-02-13T19:20:25.517047299Z" level=info msg="TearDown network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" successfully" Feb 13 19:20:25.517087 containerd[1509]: time="2025-02-13T19:20:25.517057538Z" level=info msg="StopPodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" returns successfully" Feb 13 19:20:25.517270 containerd[1509]: time="2025-02-13T19:20:25.517253026Z" level=info msg="RemovePodSandbox for \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\"" Feb 13 19:20:25.517270 containerd[1509]: time="2025-02-13T19:20:25.517272943Z" level=info msg="Forcibly stopping sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\"" Feb 13 19:20:25.517378 containerd[1509]: time="2025-02-13T19:20:25.517341051Z" level=info msg="TearDown network for sandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" successfully" Feb 13 19:20:25.553754 containerd[1509]: time="2025-02-13T19:20:25.553706861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.553833 containerd[1509]: time="2025-02-13T19:20:25.553776171Z" level=info msg="RemovePodSandbox \"0bf68a5d11253d0868d34643809cb8251eecb54dc7b452b447d235fe2cd6a8ed\" returns successfully" Feb 13 19:20:25.554190 containerd[1509]: time="2025-02-13T19:20:25.554145072Z" level=info msg="StopPodSandbox for \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\"" Feb 13 19:20:25.554234 containerd[1509]: time="2025-02-13T19:20:25.554226565Z" level=info msg="TearDown network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\" successfully" Feb 13 19:20:25.554263 containerd[1509]: time="2025-02-13T19:20:25.554236514Z" level=info msg="StopPodSandbox for \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\" returns successfully" Feb 13 19:20:25.554512 containerd[1509]: time="2025-02-13T19:20:25.554472618Z" level=info msg="RemovePodSandbox for \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\"" Feb 13 19:20:25.554512 containerd[1509]: time="2025-02-13T19:20:25.554493827Z" level=info msg="Forcibly stopping sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\"" Feb 13 19:20:25.554669 containerd[1509]: time="2025-02-13T19:20:25.554554752Z" level=info msg="TearDown network for sandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\" successfully" Feb 13 19:20:25.561692 containerd[1509]: time="2025-02-13T19:20:25.561623454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.561749 containerd[1509]: time="2025-02-13T19:20:25.561710668Z" level=info msg="RemovePodSandbox \"aec466edfd31ae3f7e8e6de1d2d38e7c8772b64ad1b16b6b9ad990fc0faf75fb\" returns successfully" Feb 13 19:20:25.562219 containerd[1509]: time="2025-02-13T19:20:25.562183554Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:20:25.562362 containerd[1509]: time="2025-02-13T19:20:25.562316594Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:20:25.562362 containerd[1509]: time="2025-02-13T19:20:25.562331141Z" level=info msg="StopPodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:20:25.562702 containerd[1509]: time="2025-02-13T19:20:25.562674336Z" level=info msg="RemovePodSandbox for \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:20:25.562766 containerd[1509]: time="2025-02-13T19:20:25.562703931Z" level=info msg="Forcibly stopping sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\"" Feb 13 19:20:25.562846 containerd[1509]: time="2025-02-13T19:20:25.562801083Z" level=info msg="TearDown network for sandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" successfully" Feb 13 19:20:25.595079 containerd[1509]: time="2025-02-13T19:20:25.595018671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.595210 containerd[1509]: time="2025-02-13T19:20:25.595104231Z" level=info msg="RemovePodSandbox \"e8ca59a891e130f703ed305dd39868527a9d436f42cce2cfcb0ab37b010ab80d\" returns successfully" Feb 13 19:20:25.595690 containerd[1509]: time="2025-02-13T19:20:25.595651107Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:20:25.595880 containerd[1509]: time="2025-02-13T19:20:25.595817770Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:20:25.595880 containerd[1509]: time="2025-02-13T19:20:25.595871351Z" level=info msg="StopPodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:20:25.596320 containerd[1509]: time="2025-02-13T19:20:25.596298021Z" level=info msg="RemovePodSandbox for \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:20:25.596375 containerd[1509]: time="2025-02-13T19:20:25.596323058Z" level=info msg="Forcibly stopping sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\"" Feb 13 19:20:25.596466 containerd[1509]: time="2025-02-13T19:20:25.596417746Z" level=info msg="TearDown network for sandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" successfully" Feb 13 19:20:25.643662 containerd[1509]: time="2025-02-13T19:20:25.643581485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.643662 containerd[1509]: time="2025-02-13T19:20:25.643670452Z" level=info msg="RemovePodSandbox \"5a66790b645f364d7534d92aa516250a90696bc82f1b424dbd4af78e2f8b1e79\" returns successfully" Feb 13 19:20:25.644311 containerd[1509]: time="2025-02-13T19:20:25.644271269Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:20:25.644508 containerd[1509]: time="2025-02-13T19:20:25.644427833Z" level=info msg="TearDown network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" successfully" Feb 13 19:20:25.644508 containerd[1509]: time="2025-02-13T19:20:25.644443742Z" level=info msg="StopPodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" returns successfully" Feb 13 19:20:25.644780 containerd[1509]: time="2025-02-13T19:20:25.644744728Z" level=info msg="RemovePodSandbox for \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:20:25.644822 containerd[1509]: time="2025-02-13T19:20:25.644778962Z" level=info msg="Forcibly stopping sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\"" Feb 13 19:20:25.644946 containerd[1509]: time="2025-02-13T19:20:25.644887646Z" level=info msg="TearDown network for sandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" successfully" Feb 13 19:20:25.662300 containerd[1509]: time="2025-02-13T19:20:25.662234512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.662387 containerd[1509]: time="2025-02-13T19:20:25.662321706Z" level=info msg="RemovePodSandbox \"db5d56497fd3262defbb393152e6b0ff1e00118570b6f2c732dd3da3f94b6453\" returns successfully" Feb 13 19:20:25.662878 containerd[1509]: time="2025-02-13T19:20:25.662843044Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" Feb 13 19:20:25.663020 containerd[1509]: time="2025-02-13T19:20:25.662998034Z" level=info msg="TearDown network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" successfully" Feb 13 19:20:25.663020 containerd[1509]: time="2025-02-13T19:20:25.663011610Z" level=info msg="StopPodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" returns successfully" Feb 13 19:20:25.663377 containerd[1509]: time="2025-02-13T19:20:25.663350376Z" level=info msg="RemovePodSandbox for \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" Feb 13 19:20:25.663433 containerd[1509]: time="2025-02-13T19:20:25.663376946Z" level=info msg="Forcibly stopping sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\"" Feb 13 19:20:25.663492 containerd[1509]: time="2025-02-13T19:20:25.663450193Z" level=info msg="TearDown network for sandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" successfully" Feb 13 19:20:25.667818 containerd[1509]: time="2025-02-13T19:20:25.667758066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.667865 containerd[1509]: time="2025-02-13T19:20:25.667824390Z" level=info msg="RemovePodSandbox \"428d7e52e22e7fd81bd8999de7a26309f4556bc1db819f0760d353d288fabde9\" returns successfully" Feb 13 19:20:25.668182 containerd[1509]: time="2025-02-13T19:20:25.668144721Z" level=info msg="StopPodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\"" Feb 13 19:20:25.668276 containerd[1509]: time="2025-02-13T19:20:25.668255048Z" level=info msg="TearDown network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" successfully" Feb 13 19:20:25.668276 containerd[1509]: time="2025-02-13T19:20:25.668273222Z" level=info msg="StopPodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" returns successfully" Feb 13 19:20:25.668561 containerd[1509]: time="2025-02-13T19:20:25.668530986Z" level=info msg="RemovePodSandbox for \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\"" Feb 13 19:20:25.668603 containerd[1509]: time="2025-02-13T19:20:25.668566713Z" level=info msg="Forcibly stopping sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\"" Feb 13 19:20:25.668696 containerd[1509]: time="2025-02-13T19:20:25.668655138Z" level=info msg="TearDown network for sandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" successfully" Feb 13 19:20:25.672470 containerd[1509]: time="2025-02-13T19:20:25.672434420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.672551 containerd[1509]: time="2025-02-13T19:20:25.672482610Z" level=info msg="RemovePodSandbox \"0f7c5d3148b22fef5bc29d10991b482dd62983c7cef9d874e30deda1c16b6ea4\" returns successfully" Feb 13 19:20:25.672846 containerd[1509]: time="2025-02-13T19:20:25.672825333Z" level=info msg="StopPodSandbox for \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\"" Feb 13 19:20:25.672919 containerd[1509]: time="2025-02-13T19:20:25.672907146Z" level=info msg="TearDown network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\" successfully" Feb 13 19:20:25.672959 containerd[1509]: time="2025-02-13T19:20:25.672919259Z" level=info msg="StopPodSandbox for \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\" returns successfully" Feb 13 19:20:25.673954 containerd[1509]: time="2025-02-13T19:20:25.673203542Z" level=info msg="RemovePodSandbox for \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\"" Feb 13 19:20:25.673954 containerd[1509]: time="2025-02-13T19:20:25.673224722Z" level=info msg="Forcibly stopping sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\"" Feb 13 19:20:25.673954 containerd[1509]: time="2025-02-13T19:20:25.673290375Z" level=info msg="TearDown network for sandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\" successfully" Feb 13 19:20:25.676907 containerd[1509]: time="2025-02-13T19:20:25.676865012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.676907 containerd[1509]: time="2025-02-13T19:20:25.676900959Z" level=info msg="RemovePodSandbox \"96c471c99ec681c8625e12247456b1452789f90a1ba3aae499c913468fc3e15d\" returns successfully" Feb 13 19:20:25.677246 containerd[1509]: time="2025-02-13T19:20:25.677214378Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:20:25.677313 containerd[1509]: time="2025-02-13T19:20:25.677297243Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:20:25.677313 containerd[1509]: time="2025-02-13T19:20:25.677308314Z" level=info msg="StopPodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:20:25.677515 containerd[1509]: time="2025-02-13T19:20:25.677492369Z" level=info msg="RemovePodSandbox for \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:20:25.677515 containerd[1509]: time="2025-02-13T19:20:25.677512497Z" level=info msg="Forcibly stopping sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\"" Feb 13 19:20:25.677595 containerd[1509]: time="2025-02-13T19:20:25.677570796Z" level=info msg="TearDown network for sandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" successfully" Feb 13 19:20:25.681293 containerd[1509]: time="2025-02-13T19:20:25.681252043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.681352 containerd[1509]: time="2025-02-13T19:20:25.681317045Z" level=info msg="RemovePodSandbox \"a0057fbff34d02e6950b31646cf72b9f2b10af43a5bfa5a0a096a9d2d73ffbf9\" returns successfully" Feb 13 19:20:25.681647 containerd[1509]: time="2025-02-13T19:20:25.681619213Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" Feb 13 19:20:25.681729 containerd[1509]: time="2025-02-13T19:20:25.681699503Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully" Feb 13 19:20:25.681729 containerd[1509]: time="2025-02-13T19:20:25.681712888Z" level=info msg="StopPodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully" Feb 13 19:20:25.682088 containerd[1509]: time="2025-02-13T19:20:25.682048858Z" level=info msg="RemovePodSandbox for \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" Feb 13 19:20:25.682088 containerd[1509]: time="2025-02-13T19:20:25.682090697Z" level=info msg="Forcibly stopping sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\"" Feb 13 19:20:25.682260 containerd[1509]: time="2025-02-13T19:20:25.682183571Z" level=info msg="TearDown network for sandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" successfully" Feb 13 19:20:25.686164 containerd[1509]: time="2025-02-13T19:20:25.686130566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.686223 containerd[1509]: time="2025-02-13T19:20:25.686175992Z" level=info msg="RemovePodSandbox \"112d91f28499874bd45ef7d9df944d7ab3e21f333bdf9c64318968bf7bd17c0d\" returns successfully" Feb 13 19:20:25.686542 containerd[1509]: time="2025-02-13T19:20:25.686505119Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\"" Feb 13 19:20:25.686670 containerd[1509]: time="2025-02-13T19:20:25.686642567Z" level=info msg="TearDown network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" successfully" Feb 13 19:20:25.686670 containerd[1509]: time="2025-02-13T19:20:25.686664058Z" level=info msg="StopPodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" returns successfully" Feb 13 19:20:25.687031 containerd[1509]: time="2025-02-13T19:20:25.686985691Z" level=info msg="RemovePodSandbox for \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\"" Feb 13 19:20:25.687031 containerd[1509]: time="2025-02-13T19:20:25.687019985Z" level=info msg="Forcibly stopping sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\"" Feb 13 19:20:25.687151 containerd[1509]: time="2025-02-13T19:20:25.687101929Z" level=info msg="TearDown network for sandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" successfully" Feb 13 19:20:25.691141 containerd[1509]: time="2025-02-13T19:20:25.691106823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.691203 containerd[1509]: time="2025-02-13T19:20:25.691155324Z" level=info msg="RemovePodSandbox \"3317ad8406f80861889f892c01b085688467213901969afa87eca5f07b518155\" returns successfully" Feb 13 19:20:25.691474 containerd[1509]: time="2025-02-13T19:20:25.691443946Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\"" Feb 13 19:20:25.691604 containerd[1509]: time="2025-02-13T19:20:25.691572306Z" level=info msg="TearDown network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" successfully" Feb 13 19:20:25.691604 containerd[1509]: time="2025-02-13T19:20:25.691595610Z" level=info msg="StopPodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" returns successfully" Feb 13 19:20:25.692014 containerd[1509]: time="2025-02-13T19:20:25.691987795Z" level=info msg="RemovePodSandbox for \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\"" Feb 13 19:20:25.692059 containerd[1509]: time="2025-02-13T19:20:25.692021248Z" level=info msg="Forcibly stopping sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\"" Feb 13 19:20:25.692150 containerd[1509]: time="2025-02-13T19:20:25.692113512Z" level=info msg="TearDown network for sandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" successfully" Feb 13 19:20:25.696497 containerd[1509]: time="2025-02-13T19:20:25.696455548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.696543 containerd[1509]: time="2025-02-13T19:20:25.696500152Z" level=info msg="RemovePodSandbox \"809a96cc195ebe226446d3ce4b159f4046bef8c112465278b341459d48e44008\" returns successfully" Feb 13 19:20:25.696973 containerd[1509]: time="2025-02-13T19:20:25.696760591Z" level=info msg="StopPodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\"" Feb 13 19:20:25.696973 containerd[1509]: time="2025-02-13T19:20:25.696885535Z" level=info msg="TearDown network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" successfully" Feb 13 19:20:25.696973 containerd[1509]: time="2025-02-13T19:20:25.696898670Z" level=info msg="StopPodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" returns successfully" Feb 13 19:20:25.697293 containerd[1509]: time="2025-02-13T19:20:25.697274765Z" level=info msg="RemovePodSandbox for \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\"" Feb 13 19:20:25.697326 containerd[1509]: time="2025-02-13T19:20:25.697294452Z" level=info msg="Forcibly stopping sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\"" Feb 13 19:20:25.697391 containerd[1509]: time="2025-02-13T19:20:25.697357581Z" level=info msg="TearDown network for sandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" successfully" Feb 13 19:20:25.701210 containerd[1509]: time="2025-02-13T19:20:25.701169573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.701210 containerd[1509]: time="2025-02-13T19:20:25.701215639Z" level=info msg="RemovePodSandbox \"19a37da2ed95445e84cb6e3a0a5bc9819fd44381273af2c9469fd3cfd98faa0e\" returns successfully" Feb 13 19:20:25.701644 containerd[1509]: time="2025-02-13T19:20:25.701610460Z" level=info msg="StopPodSandbox for \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\"" Feb 13 19:20:25.701773 containerd[1509]: time="2025-02-13T19:20:25.701747768Z" level=info msg="TearDown network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\" successfully" Feb 13 19:20:25.701773 containerd[1509]: time="2025-02-13T19:20:25.701764920Z" level=info msg="StopPodSandbox for \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\" returns successfully" Feb 13 19:20:25.702056 containerd[1509]: time="2025-02-13T19:20:25.702018826Z" level=info msg="RemovePodSandbox for \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\"" Feb 13 19:20:25.702056 containerd[1509]: time="2025-02-13T19:20:25.702045576Z" level=info msg="Forcibly stopping sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\"" Feb 13 19:20:25.702162 containerd[1509]: time="2025-02-13T19:20:25.702114786Z" level=info msg="TearDown network for sandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\" successfully" Feb 13 19:20:25.705817 containerd[1509]: time="2025-02-13T19:20:25.705771567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.705860 containerd[1509]: time="2025-02-13T19:20:25.705825598Z" level=info msg="RemovePodSandbox \"623fe5bb6a6bbf6aec8ba354c6e8a363a4febd9822370e88194feb024a48e86e\" returns successfully" Feb 13 19:20:25.706130 containerd[1509]: time="2025-02-13T19:20:25.706100565Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:20:25.706251 containerd[1509]: time="2025-02-13T19:20:25.706222964Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:20:25.706251 containerd[1509]: time="2025-02-13T19:20:25.706244625Z" level=info msg="StopPodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:20:25.706510 containerd[1509]: time="2025-02-13T19:20:25.706478223Z" level=info msg="RemovePodSandbox for \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:20:25.706510 containerd[1509]: time="2025-02-13T19:20:25.706506356Z" level=info msg="Forcibly stopping sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\"" Feb 13 19:20:25.706625 containerd[1509]: time="2025-02-13T19:20:25.706581046Z" level=info msg="TearDown network for sandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" successfully" Feb 13 19:20:25.710570 containerd[1509]: time="2025-02-13T19:20:25.710522952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.710570 containerd[1509]: time="2025-02-13T19:20:25.710574880Z" level=info msg="RemovePodSandbox \"012149d218be0b84457b1b9965c69b762db4242eab07ccaab2071962f38aede7\" returns successfully" Feb 13 19:20:25.710866 containerd[1509]: time="2025-02-13T19:20:25.710835559Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:20:25.710947 containerd[1509]: time="2025-02-13T19:20:25.710916651Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:20:25.710983 containerd[1509]: time="2025-02-13T19:20:25.710926339Z" level=info msg="StopPodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:20:25.711216 containerd[1509]: time="2025-02-13T19:20:25.711182329Z" level=info msg="RemovePodSandbox for \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:20:25.711216 containerd[1509]: time="2025-02-13T19:20:25.711205893Z" level=info msg="Forcibly stopping sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\"" Feb 13 19:20:25.711313 containerd[1509]: time="2025-02-13T19:20:25.711276826Z" level=info msg="TearDown network for sandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" successfully" Feb 13 19:20:25.715528 containerd[1509]: time="2025-02-13T19:20:25.715484792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.715579 containerd[1509]: time="2025-02-13T19:20:25.715548862Z" level=info msg="RemovePodSandbox \"a4d6d2f8374393d8ee5a03bb27ea050b541beba285512c7c8f33ce7bef4fa157\" returns successfully" Feb 13 19:20:25.715959 containerd[1509]: time="2025-02-13T19:20:25.715918265Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:20:25.716142 containerd[1509]: time="2025-02-13T19:20:25.716109293Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:20:25.716142 containerd[1509]: time="2025-02-13T19:20:25.716129881Z" level=info msg="StopPodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:20:25.716834 containerd[1509]: time="2025-02-13T19:20:25.716501900Z" level=info msg="RemovePodSandbox for \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:20:25.716834 containerd[1509]: time="2025-02-13T19:20:25.716530664Z" level=info msg="Forcibly stopping sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\"" Feb 13 19:20:25.716834 containerd[1509]: time="2025-02-13T19:20:25.716610343Z" level=info msg="TearDown network for sandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" successfully" Feb 13 19:20:25.721226 containerd[1509]: time="2025-02-13T19:20:25.721186449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.721226 containerd[1509]: time="2025-02-13T19:20:25.721237375Z" level=info msg="RemovePodSandbox \"a46084b21d0b691f3454ac2037ac7f038aee52b21e350929b94eac3d2155a54a\" returns successfully" Feb 13 19:20:25.721663 containerd[1509]: time="2025-02-13T19:20:25.721622647Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:20:25.721790 containerd[1509]: time="2025-02-13T19:20:25.721761047Z" level=info msg="TearDown network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" successfully" Feb 13 19:20:25.721790 containerd[1509]: time="2025-02-13T19:20:25.721775564Z" level=info msg="StopPodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" returns successfully" Feb 13 19:20:25.722080 containerd[1509]: time="2025-02-13T19:20:25.722053887Z" level=info msg="RemovePodSandbox for \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:20:25.722080 containerd[1509]: time="2025-02-13T19:20:25.722075868Z" level=info msg="Forcibly stopping sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\"" Feb 13 19:20:25.722198 containerd[1509]: time="2025-02-13T19:20:25.722140610Z" level=info msg="TearDown network for sandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" successfully" Feb 13 19:20:25.726226 containerd[1509]: time="2025-02-13T19:20:25.726198493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.726288 containerd[1509]: time="2025-02-13T19:20:25.726246803Z" level=info msg="RemovePodSandbox \"09bf7329edab9752930a96fa50b6a0ec8171f4569ecca9fbec1f9e68e2af4244\" returns successfully" Feb 13 19:20:25.726883 containerd[1509]: time="2025-02-13T19:20:25.726652384Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" Feb 13 19:20:25.726883 containerd[1509]: time="2025-02-13T19:20:25.726769394Z" level=info msg="TearDown network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" successfully" Feb 13 19:20:25.726883 containerd[1509]: time="2025-02-13T19:20:25.726797136Z" level=info msg="StopPodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" returns successfully" Feb 13 19:20:25.727310 containerd[1509]: time="2025-02-13T19:20:25.727273900Z" level=info msg="RemovePodSandbox for \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" Feb 13 19:20:25.727380 containerd[1509]: time="2025-02-13T19:20:25.727316300Z" level=info msg="Forcibly stopping sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\"" Feb 13 19:20:25.727482 containerd[1509]: time="2025-02-13T19:20:25.727426617Z" level=info msg="TearDown network for sandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" successfully" Feb 13 19:20:25.731743 containerd[1509]: time="2025-02-13T19:20:25.731707078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.731874 containerd[1509]: time="2025-02-13T19:20:25.731756992Z" level=info msg="RemovePodSandbox \"dfb43416336efb54ca2974e783c08bdef39f6d1537348c46ff62442abdae62e3\" returns successfully" Feb 13 19:20:25.732189 containerd[1509]: time="2025-02-13T19:20:25.732163284Z" level=info msg="StopPodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\"" Feb 13 19:20:25.732272 containerd[1509]: time="2025-02-13T19:20:25.732248824Z" level=info msg="TearDown network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" successfully" Feb 13 19:20:25.732272 containerd[1509]: time="2025-02-13T19:20:25.732263912Z" level=info msg="StopPodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" returns successfully" Feb 13 19:20:25.732624 containerd[1509]: time="2025-02-13T19:20:25.732600725Z" level=info msg="RemovePodSandbox for \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\"" Feb 13 19:20:25.732624 containerd[1509]: time="2025-02-13T19:20:25.732620743Z" level=info msg="Forcibly stopping sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\"" Feb 13 19:20:25.732725 containerd[1509]: time="2025-02-13T19:20:25.732681677Z" level=info msg="TearDown network for sandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" successfully" Feb 13 19:20:25.736561 containerd[1509]: time="2025-02-13T19:20:25.736511743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.736561 containerd[1509]: time="2025-02-13T19:20:25.736567898Z" level=info msg="RemovePodSandbox \"6b080aa98b5c6f4fb5af57f78930df1d6f625a1349cc1a3f16cb1824e4aec2f1\" returns successfully" Feb 13 19:20:25.737122 containerd[1509]: time="2025-02-13T19:20:25.736970733Z" level=info msg="StopPodSandbox for \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\"" Feb 13 19:20:25.737122 containerd[1509]: time="2025-02-13T19:20:25.737061373Z" level=info msg="TearDown network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\" successfully" Feb 13 19:20:25.737122 containerd[1509]: time="2025-02-13T19:20:25.737071743Z" level=info msg="StopPodSandbox for \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\" returns successfully" Feb 13 19:20:25.737426 containerd[1509]: time="2025-02-13T19:20:25.737396383Z" level=info msg="RemovePodSandbox for \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\"" Feb 13 19:20:25.737426 containerd[1509]: time="2025-02-13T19:20:25.737419185Z" level=info msg="Forcibly stopping sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\"" Feb 13 19:20:25.737533 containerd[1509]: time="2025-02-13T19:20:25.737482755Z" level=info msg="TearDown network for sandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\" successfully" Feb 13 19:20:25.741437 containerd[1509]: time="2025-02-13T19:20:25.741399844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.741499 containerd[1509]: time="2025-02-13T19:20:25.741451340Z" level=info msg="RemovePodSandbox \"b4eb5748a92b8b3578babb59a374cf5acb5c6d711b70c8448583913e6f7e3420\" returns successfully" Feb 13 19:20:25.741790 containerd[1509]: time="2025-02-13T19:20:25.741759920Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:20:25.741890 containerd[1509]: time="2025-02-13T19:20:25.741871760Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:20:25.741890 containerd[1509]: time="2025-02-13T19:20:25.741886678Z" level=info msg="StopPodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:20:25.742193 containerd[1509]: time="2025-02-13T19:20:25.742168085Z" level=info msg="RemovePodSandbox for \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:20:25.742238 containerd[1509]: time="2025-02-13T19:20:25.742195186Z" level=info msg="Forcibly stopping sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\"" Feb 13 19:20:25.742318 containerd[1509]: time="2025-02-13T19:20:25.742274204Z" level=info msg="TearDown network for sandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" successfully" Feb 13 19:20:25.746264 containerd[1509]: time="2025-02-13T19:20:25.746235196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.746318 containerd[1509]: time="2025-02-13T19:20:25.746271584Z" level=info msg="RemovePodSandbox \"d1f5c9272922aca98453cfc37b49031869e9a116c6db5a2c5d07daad31831249\" returns successfully" Feb 13 19:20:25.746543 containerd[1509]: time="2025-02-13T19:20:25.746520311Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:20:25.746626 containerd[1509]: time="2025-02-13T19:20:25.746606042Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully" Feb 13 19:20:25.746626 containerd[1509]: time="2025-02-13T19:20:25.746622613Z" level=info msg="StopPodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:20:25.746956 containerd[1509]: time="2025-02-13T19:20:25.746914761Z" level=info msg="RemovePodSandbox for \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:20:25.746956 containerd[1509]: time="2025-02-13T19:20:25.746962050Z" level=info msg="Forcibly stopping sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\"" Feb 13 19:20:25.747137 containerd[1509]: time="2025-02-13T19:20:25.747033313Z" level=info msg="TearDown network for sandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" successfully" Feb 13 19:20:25.750733 containerd[1509]: time="2025-02-13T19:20:25.750692719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.750824 containerd[1509]: time="2025-02-13T19:20:25.750744907Z" level=info msg="RemovePodSandbox \"579ce3d749ed10138b37d502914ad53cff43a3ebfe65c40d55985560efdc778f\" returns successfully" Feb 13 19:20:25.751064 containerd[1509]: time="2025-02-13T19:20:25.751036665Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:20:25.751127 containerd[1509]: time="2025-02-13T19:20:25.751112327Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully" Feb 13 19:20:25.751155 containerd[1509]: time="2025-02-13T19:20:25.751123288Z" level=info msg="StopPodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully" Feb 13 19:20:25.751347 containerd[1509]: time="2025-02-13T19:20:25.751329033Z" level=info msg="RemovePodSandbox for \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:20:25.751384 containerd[1509]: time="2025-02-13T19:20:25.751346857Z" level=info msg="Forcibly stopping sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\"" Feb 13 19:20:25.751447 containerd[1509]: time="2025-02-13T19:20:25.751416357Z" level=info msg="TearDown network for sandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" successfully" Feb 13 19:20:25.755055 containerd[1509]: time="2025-02-13T19:20:25.755014728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.755055 containerd[1509]: time="2025-02-13T19:20:25.755049784Z" level=info msg="RemovePodSandbox \"ecd73bb8b12d8fc535bc1932a0bac8e8d8a79894946935ba136b6235a56e101a\" returns successfully" Feb 13 19:20:25.755326 containerd[1509]: time="2025-02-13T19:20:25.755303269Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" Feb 13 19:20:25.755388 containerd[1509]: time="2025-02-13T19:20:25.755380564Z" level=info msg="TearDown network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" successfully" Feb 13 19:20:25.755413 containerd[1509]: time="2025-02-13T19:20:25.755389842Z" level=info msg="StopPodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" returns successfully" Feb 13 19:20:25.755681 containerd[1509]: time="2025-02-13T19:20:25.755662264Z" level=info msg="RemovePodSandbox for \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" Feb 13 19:20:25.755742 containerd[1509]: time="2025-02-13T19:20:25.755680819Z" level=info msg="Forcibly stopping sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\"" Feb 13 19:20:25.755766 containerd[1509]: time="2025-02-13T19:20:25.755739038Z" level=info msg="TearDown network for sandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" successfully" Feb 13 19:20:25.759386 containerd[1509]: time="2025-02-13T19:20:25.759353359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.759386 containerd[1509]: time="2025-02-13T19:20:25.759404044Z" level=info msg="RemovePodSandbox \"aaee04e10fec030814b864d50268e9d8d2d2fc4441a350f70ec0bf8da83bf60f\" returns successfully" Feb 13 19:20:25.759755 containerd[1509]: time="2025-02-13T19:20:25.759716690Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\"" Feb 13 19:20:25.759993 containerd[1509]: time="2025-02-13T19:20:25.759845011Z" level=info msg="TearDown network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" successfully" Feb 13 19:20:25.759993 containerd[1509]: time="2025-02-13T19:20:25.759870779Z" level=info msg="StopPodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" returns successfully" Feb 13 19:20:25.760248 containerd[1509]: time="2025-02-13T19:20:25.760221838Z" level=info msg="RemovePodSandbox for \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\"" Feb 13 19:20:25.760286 containerd[1509]: time="2025-02-13T19:20:25.760251403Z" level=info msg="Forcibly stopping sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\"" Feb 13 19:20:25.760376 containerd[1509]: time="2025-02-13T19:20:25.760337826Z" level=info msg="TearDown network for sandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" successfully" Feb 13 19:20:25.764665 containerd[1509]: time="2025-02-13T19:20:25.764629758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.764730 containerd[1509]: time="2025-02-13T19:20:25.764681916Z" level=info msg="RemovePodSandbox \"22612d126cbb993b3b15255825334e79feb8a18378bf08bf1fe8544fec6e140f\" returns successfully" Feb 13 19:20:25.765014 containerd[1509]: time="2025-02-13T19:20:25.764979274Z" level=info msg="StopPodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\"" Feb 13 19:20:25.765112 containerd[1509]: time="2025-02-13T19:20:25.765076847Z" level=info msg="TearDown network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" successfully" Feb 13 19:20:25.765112 containerd[1509]: time="2025-02-13T19:20:25.765099680Z" level=info msg="StopPodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" returns successfully" Feb 13 19:20:25.765352 containerd[1509]: time="2025-02-13T19:20:25.765327467Z" level=info msg="RemovePodSandbox for \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\"" Feb 13 19:20:25.765401 containerd[1509]: time="2025-02-13T19:20:25.765350711Z" level=info msg="Forcibly stopping sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\"" Feb 13 19:20:25.765460 containerd[1509]: time="2025-02-13T19:20:25.765422926Z" level=info msg="TearDown network for sandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" successfully" Feb 13 19:20:25.769482 containerd[1509]: time="2025-02-13T19:20:25.769390841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.769482 containerd[1509]: time="2025-02-13T19:20:25.769432028Z" level=info msg="RemovePodSandbox \"d75718d8650c406741f66f52d690c1b6879131cbb9819a594718ce303ac72c0d\" returns successfully" Feb 13 19:20:25.769842 containerd[1509]: time="2025-02-13T19:20:25.769806912Z" level=info msg="StopPodSandbox for \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\"" Feb 13 19:20:25.769989 containerd[1509]: time="2025-02-13T19:20:25.769907571Z" level=info msg="TearDown network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\" successfully" Feb 13 19:20:25.769989 containerd[1509]: time="2025-02-13T19:20:25.769925795Z" level=info msg="StopPodSandbox for \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\" returns successfully" Feb 13 19:20:25.770262 containerd[1509]: time="2025-02-13T19:20:25.770189259Z" level=info msg="RemovePodSandbox for \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\"" Feb 13 19:20:25.770262 containerd[1509]: time="2025-02-13T19:20:25.770209116Z" level=info msg="Forcibly stopping sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\"" Feb 13 19:20:25.770474 containerd[1509]: time="2025-02-13T19:20:25.770281993Z" level=info msg="TearDown network for sandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\" successfully" Feb 13 19:20:25.774411 containerd[1509]: time="2025-02-13T19:20:25.774376305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.774470 containerd[1509]: time="2025-02-13T19:20:25.774417212Z" level=info msg="RemovePodSandbox \"3a286ba4b19773d3a8f0b268b8f547915cdebb2039ff90081ebeeb0093339669\" returns successfully" Feb 13 19:20:25.774820 containerd[1509]: time="2025-02-13T19:20:25.774776075Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:20:25.774919 containerd[1509]: time="2025-02-13T19:20:25.774895178Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:20:25.774919 containerd[1509]: time="2025-02-13T19:20:25.774914074Z" level=info msg="StopPodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 19:20:25.775260 containerd[1509]: time="2025-02-13T19:20:25.775229946Z" level=info msg="RemovePodSandbox for \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:20:25.775308 containerd[1509]: time="2025-02-13T19:20:25.775263679Z" level=info msg="Forcibly stopping sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\"" Feb 13 19:20:25.775386 containerd[1509]: time="2025-02-13T19:20:25.775346456Z" level=info msg="TearDown network for sandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" successfully" Feb 13 19:20:25.779651 containerd[1509]: time="2025-02-13T19:20:25.779599575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.779651 containerd[1509]: time="2025-02-13T19:20:25.779644609Z" level=info msg="RemovePodSandbox \"d85522c687d55461ae43a6d293391b3e27cfc28cf857b711cb7aace5ae9cbdbd\" returns successfully" Feb 13 19:20:25.780007 containerd[1509]: time="2025-02-13T19:20:25.779980520Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:20:25.780124 containerd[1509]: time="2025-02-13T19:20:25.780104192Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully" Feb 13 19:20:25.780174 containerd[1509]: time="2025-02-13T19:20:25.780121124Z" level=info msg="StopPodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully" Feb 13 19:20:25.780966 containerd[1509]: time="2025-02-13T19:20:25.780391571Z" level=info msg="RemovePodSandbox for \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:20:25.780966 containerd[1509]: time="2025-02-13T19:20:25.780418281Z" level=info msg="Forcibly stopping sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\"" Feb 13 19:20:25.780966 containerd[1509]: time="2025-02-13T19:20:25.780500465Z" level=info msg="TearDown network for sandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" successfully" Feb 13 19:20:25.784608 containerd[1509]: time="2025-02-13T19:20:25.784575540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:20:25.784677 containerd[1509]: time="2025-02-13T19:20:25.784612170Z" level=info msg="RemovePodSandbox \"072ef559e96d8254b206455bebd2457c2401b8fd50f2d927cb6df58a2c5b3bdd\" returns successfully"
Feb 13 19:20:25.784908 containerd[1509]: time="2025-02-13T19:20:25.784869032Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\""
Feb 13 19:20:25.785007 containerd[1509]: time="2025-02-13T19:20:25.784980060Z" level=info msg="TearDown network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" successfully"
Feb 13 19:20:25.785007 containerd[1509]: time="2025-02-13T19:20:25.784993956Z" level=info msg="StopPodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" returns successfully"
Feb 13 19:20:25.785219 containerd[1509]: time="2025-02-13T19:20:25.785197959Z" level=info msg="RemovePodSandbox for \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\""
Feb 13 19:20:25.785219 containerd[1509]: time="2025-02-13T19:20:25.785216904Z" level=info msg="Forcibly stopping sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\""
Feb 13 19:20:25.785315 containerd[1509]: time="2025-02-13T19:20:25.785286104Z" level=info msg="TearDown network for sandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" successfully"
Feb 13 19:20:25.790192 containerd[1509]: time="2025-02-13T19:20:25.790167112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:20:25.790258 containerd[1509]: time="2025-02-13T19:20:25.790203319Z" level=info msg="RemovePodSandbox \"6858f5890596795fb1fbe8a05bbac5ef521e0330a21436035ad2c18a3f4edb8b\" returns successfully"
Feb 13 19:20:25.790507 containerd[1509]: time="2025-02-13T19:20:25.790480490Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\""
Feb 13 19:20:25.790596 containerd[1509]: time="2025-02-13T19:20:25.790576480Z" level=info msg="TearDown network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" successfully"
Feb 13 19:20:25.790596 containerd[1509]: time="2025-02-13T19:20:25.790593923Z" level=info msg="StopPodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" returns successfully"
Feb 13 19:20:25.790846 containerd[1509]: time="2025-02-13T19:20:25.790814326Z" level=info msg="RemovePodSandbox for \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\""
Feb 13 19:20:25.790846 containerd[1509]: time="2025-02-13T19:20:25.790837440Z" level=info msg="Forcibly stopping sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\""
Feb 13 19:20:25.790966 containerd[1509]: time="2025-02-13T19:20:25.790911819Z" level=info msg="TearDown network for sandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" successfully"
Feb 13 19:20:25.797511 containerd[1509]: time="2025-02-13T19:20:25.797455226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:20:25.797511 containerd[1509]: time="2025-02-13T19:20:25.797496003Z" level=info msg="RemovePodSandbox \"9d8f11c444438848797a159c92a0195844f02d79d8ec6953dcdb04262aa866db\" returns successfully"
Feb 13 19:20:25.797809 containerd[1509]: time="2025-02-13T19:20:25.797771520Z" level=info msg="StopPodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\""
Feb 13 19:20:25.797889 containerd[1509]: time="2025-02-13T19:20:25.797869173Z" level=info msg="TearDown network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" successfully"
Feb 13 19:20:25.797889 containerd[1509]: time="2025-02-13T19:20:25.797885684Z" level=info msg="StopPodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" returns successfully"
Feb 13 19:20:25.798163 containerd[1509]: time="2025-02-13T19:20:25.798139119Z" level=info msg="RemovePodSandbox for \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\""
Feb 13 19:20:25.798196 containerd[1509]: time="2025-02-13T19:20:25.798160719Z" level=info msg="Forcibly stopping sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\""
Feb 13 19:20:25.798264 containerd[1509]: time="2025-02-13T19:20:25.798227956Z" level=info msg="TearDown network for sandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" successfully"
Feb 13 19:20:25.804145 containerd[1509]: time="2025-02-13T19:20:25.804079043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:20:25.804145 containerd[1509]: time="2025-02-13T19:20:25.804146540Z" level=info msg="RemovePodSandbox \"b88cada7262301933f5646b0c03aeadb9f23a9b782a457eb5baa4ea48b76c092\" returns successfully"
Feb 13 19:20:25.804457 containerd[1509]: time="2025-02-13T19:20:25.804405587Z" level=info msg="StopPodSandbox for \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\""
Feb 13 19:20:25.804538 containerd[1509]: time="2025-02-13T19:20:25.804496768Z" level=info msg="TearDown network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\" successfully"
Feb 13 19:20:25.804538 containerd[1509]: time="2025-02-13T19:20:25.804506296Z" level=info msg="StopPodSandbox for \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\" returns successfully"
Feb 13 19:20:25.804858 containerd[1509]: time="2025-02-13T19:20:25.804828049Z" level=info msg="RemovePodSandbox for \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\""
Feb 13 19:20:25.804858 containerd[1509]: time="2025-02-13T19:20:25.804856462Z" level=info msg="Forcibly stopping sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\""
Feb 13 19:20:25.805005 containerd[1509]: time="2025-02-13T19:20:25.804961660Z" level=info msg="TearDown network for sandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\" successfully"
Feb 13 19:20:25.808547 containerd[1509]: time="2025-02-13T19:20:25.808511239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:20:25.808628 containerd[1509]: time="2025-02-13T19:20:25.808561083Z" level=info msg="RemovePodSandbox \"1f7a0a3e18db56fa859781e933b03c72a4b9a0f1bda50e03f1b9f88f548ae3cf\" returns successfully"
Feb 13 19:20:26.758029 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:53756.service - OpenSSH per-connection server daemon (10.0.0.1:53756).
Feb 13 19:20:26.798986 sshd[6116]: Accepted publickey for core from 10.0.0.1 port 53756 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:26.800510 sshd-session[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:26.804795 systemd-logind[1499]: New session 16 of user core.
Feb 13 19:20:26.815068 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:20:26.935052 sshd[6118]: Connection closed by 10.0.0.1 port 53756
Feb 13 19:20:26.935515 sshd-session[6116]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:26.941040 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:53756.service: Deactivated successfully.
Feb 13 19:20:26.943871 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:20:26.944907 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:20:26.946380 systemd-logind[1499]: Removed session 16.
Feb 13 19:20:31.949613 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:53758.service - OpenSSH per-connection server daemon (10.0.0.1:53758).
Feb 13 19:20:31.994558 sshd[6132]: Accepted publickey for core from 10.0.0.1 port 53758 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:31.996349 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:32.001451 systemd-logind[1499]: New session 17 of user core.
Feb 13 19:20:32.013122 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:20:32.138584 sshd[6134]: Connection closed by 10.0.0.1 port 53758
Feb 13 19:20:32.139025 sshd-session[6132]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:32.153896 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:53758.service: Deactivated successfully.
Feb 13 19:20:32.156150 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:20:32.157757 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:20:32.163346 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764).
Feb 13 19:20:32.164450 systemd-logind[1499]: Removed session 17.
Feb 13 19:20:32.203518 sshd[6146]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:32.205143 sshd-session[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:32.209891 systemd-logind[1499]: New session 18 of user core.
Feb 13 19:20:32.222180 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:20:32.691125 sshd[6149]: Connection closed by 10.0.0.1 port 53764
Feb 13 19:20:32.691657 sshd-session[6146]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:32.702801 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:53764.service: Deactivated successfully.
Feb 13 19:20:32.705004 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:20:32.706587 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:20:32.714220 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:53774.service - OpenSSH per-connection server daemon (10.0.0.1:53774).
Feb 13 19:20:32.715317 systemd-logind[1499]: Removed session 18.
Feb 13 19:20:32.762193 sshd[6159]: Accepted publickey for core from 10.0.0.1 port 53774 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:32.764298 sshd-session[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:32.773474 systemd-logind[1499]: New session 19 of user core.
Feb 13 19:20:32.780347 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:20:33.664821 sshd[6162]: Connection closed by 10.0.0.1 port 53774
Feb 13 19:20:33.668069 sshd-session[6159]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:33.680349 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:53774.service: Deactivated successfully.
Feb 13 19:20:33.682649 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:20:33.683655 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:20:33.688008 systemd-logind[1499]: Removed session 19.
Feb 13 19:20:33.693246 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784).
Feb 13 19:20:33.736468 sshd[6181]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:33.737981 sshd-session[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:33.742571 systemd-logind[1499]: New session 20 of user core.
Feb 13 19:20:33.755058 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:20:33.962233 sshd[6184]: Connection closed by 10.0.0.1 port 53784
Feb 13 19:20:33.962979 sshd-session[6181]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:33.972670 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:53784.service: Deactivated successfully.
Feb 13 19:20:33.975344 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:20:33.976342 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:20:33.987440 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:53792.service - OpenSSH per-connection server daemon (10.0.0.1:53792).
Feb 13 19:20:33.988337 systemd-logind[1499]: Removed session 20.
Feb 13 19:20:34.024948 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 53792 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:34.026600 sshd-session[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:34.031242 systemd-logind[1499]: New session 21 of user core.
Feb 13 19:20:34.036061 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:20:34.148652 sshd[6197]: Connection closed by 10.0.0.1 port 53792
Feb 13 19:20:34.149038 sshd-session[6194]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:34.152839 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:53792.service: Deactivated successfully.
Feb 13 19:20:34.155030 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:20:34.155809 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:20:34.156634 systemd-logind[1499]: Removed session 21.
Feb 13 19:20:34.935926 kubelet[2625]: E0213 19:20:34.935892 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:38.262835 kubelet[2625]: E0213 19:20:38.262799 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:39.161850 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:36634.service - OpenSSH per-connection server daemon (10.0.0.1:36634).
Feb 13 19:20:39.204029 sshd[6238]: Accepted publickey for core from 10.0.0.1 port 36634 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:39.205517 sshd-session[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:39.209772 systemd-logind[1499]: New session 22 of user core.
Feb 13 19:20:39.220073 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:20:39.332663 sshd[6240]: Connection closed by 10.0.0.1 port 36634
Feb 13 19:20:39.333070 sshd-session[6238]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:39.338176 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:36634.service: Deactivated successfully.
Feb 13 19:20:39.340525 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:20:39.341494 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:20:39.342528 systemd-logind[1499]: Removed session 22.
Feb 13 19:20:44.262440 kubelet[2625]: E0213 19:20:44.262402 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:44.347086 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:36640.service - OpenSSH per-connection server daemon (10.0.0.1:36640).
Feb 13 19:20:44.389281 sshd[6253]: Accepted publickey for core from 10.0.0.1 port 36640 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:44.390887 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:44.395245 systemd-logind[1499]: New session 23 of user core.
Feb 13 19:20:44.417063 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:20:44.528594 sshd[6255]: Connection closed by 10.0.0.1 port 36640
Feb 13 19:20:44.529015 sshd-session[6253]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:44.533134 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:36640.service: Deactivated successfully.
Feb 13 19:20:44.535036 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:20:44.535667 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:20:44.536446 systemd-logind[1499]: Removed session 23.
Feb 13 19:20:48.262571 kubelet[2625]: E0213 19:20:48.262530 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:20:49.541763 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:36188.service - OpenSSH per-connection server daemon (10.0.0.1:36188).
Feb 13 19:20:49.585754 sshd[6277]: Accepted publickey for core from 10.0.0.1 port 36188 ssh2: RSA SHA256:xgLbxCKtIvCmXzj7C6d4ih050Hrbkh61XCRduaX62E8
Feb 13 19:20:49.587063 sshd-session[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:20:49.591052 systemd-logind[1499]: New session 24 of user core.
Feb 13 19:20:49.600074 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:20:49.707584 sshd[6279]: Connection closed by 10.0.0.1 port 36188
Feb 13 19:20:49.707963 sshd-session[6277]: pam_unix(sshd:session): session closed for user core
Feb 13 19:20:49.711291 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:36188.service: Deactivated successfully.
Feb 13 19:20:49.713175 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:20:49.713793 systemd-logind[1499]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:20:49.714604 systemd-logind[1499]: Removed session 24.