Feb 13 19:32:42.886607 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:32:42.886693 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:32:42.886714 kernel: BIOS-provided physical RAM map:
Feb 13 19:32:42.886721 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:32:42.886727 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:32:42.886733 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:32:42.886741 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:32:42.886748 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:32:42.886754 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:32:42.886761 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:32:42.886768 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 19:32:42.886777 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:32:42.886784 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:32:42.886790 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:32:42.886798 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:32:42.886805 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:32:42.886815 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:32:42.886828 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:32:42.886835 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:32:42.886842 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:32:42.886849 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:32:42.886856 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:32:42.886863 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:32:42.886871 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:32:42.886878 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:32:42.886885 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:32:42.886892 kernel: NX (Execute Disable) protection: active
Feb 13 19:32:42.886901 kernel: APIC: Static calls initialized
Feb 13 19:32:42.886908 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:32:42.886916 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:32:42.886923 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:32:42.886929 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:32:42.886938 kernel: extended physical RAM map:
Feb 13 19:32:42.886947 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:32:42.886956 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:32:42.886963 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:32:42.886970 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:32:42.886977 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:32:42.886983 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:32:42.886994 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:32:42.887004 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 19:32:42.887011 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 19:32:42.887019 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 19:32:42.887026 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 19:32:42.887033 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 19:32:42.887043 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:32:42.887050 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:32:42.887057 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:32:42.887064 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:32:42.887072 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:32:42.887079 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:32:42.887086 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:32:42.887094 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:32:42.887101 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:32:42.887111 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:32:42.887118 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:32:42.887125 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:32:42.887133 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:32:42.887140 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:32:42.887147 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:32:42.887154 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:32:42.887162 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 19:32:42.887169 kernel: random: crng init done
Feb 13 19:32:42.887176 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 19:32:42.887184 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 19:32:42.887191 kernel: secureboot: Secure boot disabled
Feb 13 19:32:42.887200 kernel: SMBIOS 2.8 present.
Feb 13 19:32:42.887208 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 19:32:42.887215 kernel: Hypervisor detected: KVM
Feb 13 19:32:42.887222 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:32:42.887229 kernel: kvm-clock: using sched offset of 2788319700 cycles
Feb 13 19:32:42.887237 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:32:42.887245 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:32:42.887252 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:32:42.887260 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:32:42.887267 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 19:32:42.887279 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:32:42.887289 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:32:42.887297 kernel: Using GB pages for direct mapping
Feb 13 19:32:42.887304 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:32:42.887312 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:32:42.887320 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:32:42.887327 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:42.887335 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:42.887342 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:32:42.887352 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:42.887360 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:42.887367 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:42.887375 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:32:42.887382 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:32:42.887390 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:32:42.887397 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:32:42.887404 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:32:42.887412 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:32:42.887422 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:32:42.887429 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:32:42.887436 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:32:42.887444 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:32:42.887451 kernel: No NUMA configuration found
Feb 13 19:32:42.887458 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 19:32:42.887466 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 19:32:42.887473 kernel: Zone ranges:
Feb 13 19:32:42.887481 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:32:42.887490 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 19:32:42.887498 kernel: Normal empty
Feb 13 19:32:42.887505 kernel: Movable zone start for each node
Feb 13 19:32:42.887513 kernel: Early memory node ranges
Feb 13 19:32:42.887520 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:32:42.887527 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:32:42.887535 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:32:42.887542 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 19:32:42.887550 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 19:32:42.887559 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 19:32:42.887567 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 19:32:42.887577 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 19:32:42.887586 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 19:32:42.887593 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:32:42.887601 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:32:42.887616 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:32:42.887626 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:32:42.887646 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 19:32:42.887654 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 19:32:42.887662 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:32:42.887669 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 19:32:42.887677 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 19:32:42.887687 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:32:42.887695 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:32:42.887703 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:32:42.887711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:32:42.887718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:32:42.887728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:32:42.887736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:32:42.887744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:32:42.887751 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:32:42.887759 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:32:42.887774 kernel: TSC deadline timer available
Feb 13 19:32:42.887795 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:32:42.887812 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:32:42.887831 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:32:42.887855 kernel: kvm-guest: setup PV sched yield
Feb 13 19:32:42.887874 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 19:32:42.887897 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:32:42.887918 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:32:42.887930 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:32:42.887938 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:32:42.887945 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:32:42.887953 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:32:42.887960 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:32:42.887971 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:32:42.887980 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:32:42.887988 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:32:42.887996 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:32:42.888003 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:32:42.888011 kernel: Fallback order for Node 0: 0
Feb 13 19:32:42.888019 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 19:32:42.888026 kernel: Policy zone: DMA32
Feb 13 19:32:42.888036 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:32:42.888044 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved)
Feb 13 19:32:42.888052 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:32:42.888060 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:32:42.888067 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:32:42.888075 kernel: Dynamic Preempt: voluntary
Feb 13 19:32:42.888083 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:32:42.888091 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:32:42.888099 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:32:42.888109 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:32:42.888117 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:32:42.888124 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:32:42.888132 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:32:42.888149 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:32:42.888157 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:32:42.888166 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:32:42.888184 kernel: Console: colour dummy device 80x25
Feb 13 19:32:42.888193 kernel: printk: console [ttyS0] enabled
Feb 13 19:32:42.888203 kernel: ACPI: Core revision 20230628
Feb 13 19:32:42.888211 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:32:42.888219 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:32:42.888226 kernel: x2apic enabled
Feb 13 19:32:42.888234 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:32:42.888245 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:32:42.888255 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:32:42.888265 kernel: kvm-guest: setup PV IPIs
Feb 13 19:32:42.888274 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:32:42.888284 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:32:42.888292 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:32:42.888300 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:32:42.888308 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:32:42.888315 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:32:42.888323 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:32:42.888331 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:32:42.888339 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:32:42.888347 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:32:42.888358 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:32:42.888367 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:32:42.888376 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:32:42.888385 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:32:42.888393 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:32:42.888401 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:32:42.888409 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:32:42.888417 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:32:42.888427 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:32:42.888434 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:32:42.888442 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:32:42.888450 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:32:42.888460 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:32:42.888470 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:32:42.888478 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:32:42.888486 kernel: landlock: Up and running.
Feb 13 19:32:42.888494 kernel: SELinux: Initializing.
Feb 13 19:32:42.888504 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:32:42.888512 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:32:42.888520 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:32:42.888527 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:32:42.888535 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:32:42.888543 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:32:42.888551 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:32:42.888558 kernel: ... version: 0
Feb 13 19:32:42.888566 kernel: ... bit width: 48
Feb 13 19:32:42.888576 kernel: ... generic registers: 6
Feb 13 19:32:42.888584 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:32:42.888591 kernel: ... max period: 00007fffffffffff
Feb 13 19:32:42.888599 kernel: ... fixed-purpose events: 0
Feb 13 19:32:42.888606 kernel: ... event mask: 000000000000003f
Feb 13 19:32:42.888614 kernel: signal: max sigframe size: 1776
Feb 13 19:32:42.888622 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:32:42.888642 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:32:42.888673 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:32:42.888684 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:32:42.888692 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:32:42.888699 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:32:42.888707 kernel: smpboot: Max logical packages: 1
Feb 13 19:32:42.888715 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:32:42.888722 kernel: devtmpfs: initialized
Feb 13 19:32:42.888730 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:32:42.888738 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:32:42.888745 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:32:42.888756 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 19:32:42.888767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:32:42.888777 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 19:32:42.888785 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:32:42.888793 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:32:42.888801 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:32:42.888808 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:32:42.888816 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:32:42.888831 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:32:42.888842 kernel: audit: type=2000 audit(1739475162.403:1): state=initialized audit_enabled=0 res=1
Feb 13 19:32:42.888850 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:32:42.888857 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:32:42.888865 kernel: cpuidle: using governor menu
Feb 13 19:32:42.888873 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:32:42.888881 kernel: dca service started, version 1.12.1
Feb 13 19:32:42.888889 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:32:42.888896 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:32:42.888904 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:32:42.888914 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:32:42.888922 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:32:42.888929 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:32:42.888937 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:32:42.888945 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:32:42.888953 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:32:42.888964 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:32:42.888975 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:32:42.888982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:32:42.888993 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:32:42.889000 kernel: ACPI: Interpreter enabled
Feb 13 19:32:42.889008 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:32:42.889016 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:32:42.889023 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:32:42.889031 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:32:42.889039 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:32:42.889046 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:32:42.889234 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:32:42.889409 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:32:42.889551 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:32:42.889564 kernel: PCI host bridge to bus 0000:00
Feb 13 19:32:42.889733 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:32:42.889863 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:32:42.889978 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:32:42.890094 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 19:32:42.890206 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 19:32:42.890346 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:32:42.890465 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:32:42.890608 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:32:42.890758 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:32:42.890901 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:32:42.891024 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:32:42.891150 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:32:42.891419 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:32:42.891597 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:32:42.891746 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:32:42.891882 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:32:42.892011 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:32:42.892133 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 19:32:42.892265 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:32:42.892393 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:32:42.892516 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:32:42.892652 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 19:32:42.892788 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:32:42.892930 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:32:42.893076 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:32:42.893206 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 19:32:42.893330 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:32:42.893458 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:32:42.893580 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:32:42.893728 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:32:42.893868 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:32:42.893991 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:32:42.894135 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:32:42.894257 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:32:42.894268 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:32:42.894276 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:32:42.894284 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:32:42.894296 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:32:42.894304 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:32:42.894312 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:32:42.894320 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:32:42.894327 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:32:42.894335 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:32:42.894343 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:32:42.894351 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:32:42.894359 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:32:42.894369 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:32:42.894377 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:32:42.894385 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:32:42.894393 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:32:42.894401 kernel: iommu: Default domain type: Translated
Feb 13 19:32:42.894412 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:32:42.894423 kernel: efivars: Registered efivars operations
Feb 13 19:32:42.894432 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:32:42.894440 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:32:42.894451 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:32:42.894458 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 19:32:42.894466 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 19:32:42.894474 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 19:32:42.894481 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 19:32:42.894489 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 19:32:42.894497 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 19:32:42.894505 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 19:32:42.894642 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:32:42.894777 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:32:42.894912 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:32:42.894922 kernel: vgaarb: loaded
Feb 13 19:32:42.894930 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:32:42.894939 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:32:42.894946 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:32:42.894954 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:32:42.894962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:32:42.894974 kernel: pnp: PnP ACPI init
Feb 13 19:32:42.895109 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 19:32:42.895122 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:32:42.895130 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:32:42.895138 kernel: NET: Registered PF_INET protocol family
Feb 13 19:32:42.895166 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:32:42.895177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:32:42.895185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:32:42.895196 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:32:42.895204 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:32:42.895212 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:32:42.895220 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:32:42.895228 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:32:42.895236 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:32:42.895269 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:32:42.895409 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:32:42.895535 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:32:42.895723 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:32:42.895850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:32:42.895965 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:32:42.896086 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 19:32:42.896200 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 19:32:42.896317 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:32:42.896329 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:32:42.896337 kernel: Initialise system trusted keyrings
Feb 13 19:32:42.896350 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:32:42.896358 kernel: Key type asymmetric registered
Feb 13 19:32:42.896367 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:32:42.896375 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:32:42.896383 kernel: io scheduler mq-deadline registered
Feb 13 19:32:42.896391 kernel: io scheduler kyber registered
Feb 13 19:32:42.896399 kernel: io scheduler bfq registered
Feb 13 19:32:42.896407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:32:42.896416 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:32:42.896427 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:32:42.896437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:32:42.896445 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:32:42.896454 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:32:42.896462 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:32:42.896470 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:32:42.896480 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:32:42.896616 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:32:42.896644 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 19:32:42.896763 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:32:42.896889 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:32:42 UTC (1739475162)
Feb 13 19:32:42.897005 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:32:42.897016 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:32:42.897028 kernel: efifb: probing for efifb
Feb 13 19:32:42.897039 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 19:32:42.897047 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 19:32:42.897055 kernel: efifb: scrolling: redraw
Feb 13 19:32:42.897063 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:32:42.897071 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:32:42.897080 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:32:42.897088 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:32:42.897096 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:32:42.897104 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:32:42.897115 kernel: Segment Routing with IPv6
Feb 13 19:32:42.897123 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:32:42.897132 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:32:42.897140 kernel: Key type dns_resolver registered
Feb 13 19:32:42.897148 kernel: IPI shorthand broadcast: enabled
Feb 13 19:32:42.897156 kernel: sched_clock: Marking stable (597002552, 153551476)->(807455276, -56901248)
Feb 13 19:32:42.897164 kernel: registered taskstats version 1
Feb 13 19:32:42.897172 kernel: Loading compiled-in X.509 certificates
Feb 13 19:32:42.897180 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:32:42.897193 kernel: Key type .fscrypt registered
Feb 13 19:32:42.897203 kernel: Key type fscrypt-provisioning registered
Feb 13 19:32:42.897214 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:32:42.897225 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:32:42.897236 kernel: ima: No architecture policies found Feb 13 19:32:42.897246 kernel: clk: Disabling unused clocks Feb 13 19:32:42.897256 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:32:42.897267 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:32:42.897281 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:32:42.897292 kernel: Run /init as init process Feb 13 19:32:42.897301 kernel: with arguments: Feb 13 19:32:42.897309 kernel: /init Feb 13 19:32:42.897317 kernel: with environment: Feb 13 19:32:42.897325 kernel: HOME=/ Feb 13 19:32:42.897333 kernel: TERM=linux Feb 13 19:32:42.897341 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:32:42.897351 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:32:42.897364 systemd[1]: Detected virtualization kvm. Feb 13 19:32:42.897373 systemd[1]: Detected architecture x86-64. Feb 13 19:32:42.897382 systemd[1]: Running in initrd. Feb 13 19:32:42.897391 systemd[1]: No hostname configured, using default hostname. Feb 13 19:32:42.897399 systemd[1]: Hostname set to . Feb 13 19:32:42.897408 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:32:42.897417 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:32:42.897426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:32:42.897437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:32:42.897446 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:32:42.897455 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:32:42.897464 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:32:42.897473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:32:42.897484 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:32:42.897495 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:32:42.897504 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:32:42.897513 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:32:42.897522 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:32:42.897530 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:32:42.897539 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:32:42.897548 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:32:42.897557 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:32:42.897565 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:32:42.897577 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:32:42.897586 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:32:42.897594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:32:42.897603 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:32:42.897612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 19:32:42.897621 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:32:42.897642 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:32:42.897651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:32:42.897663 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:32:42.897672 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:32:42.897680 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:32:42.897689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:32:42.897698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:42.897707 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:32:42.897716 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:32:42.897724 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:32:42.897758 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 19:32:42.897781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:32:42.897790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:42.897799 systemd-journald[194]: Journal started Feb 13 19:32:42.897818 systemd-journald[194]: Runtime Journal (/run/log/journal/6b5c365c249e47ae93886df1a68d19f1) is 6.0M, max 48.2M, 42.2M free. Feb 13 19:32:42.888205 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 19:32:42.901960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:32:42.903648 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:32:42.904795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Feb 13 19:32:42.913020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:32:42.914887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:32:42.921896 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:32:42.924162 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 19:32:42.924713 kernel: Bridge firewalling registered Feb 13 19:32:42.925514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:32:42.927830 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:32:42.930370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:32:42.936772 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:32:42.939531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:32:42.941703 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:32:42.943754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:32:42.959060 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:32:42.968863 dracut-cmdline[229]: dracut-dracut-053 Feb 13 19:32:42.971893 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:32:42.993960 systemd-resolved[231]: Positive Trust Anchors: Feb 13 19:32:42.993981 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:32:42.994013 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:32:42.996476 systemd-resolved[231]: Defaulting to hostname 'linux'. Feb 13 19:32:42.997578 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:32:43.002508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:32:43.055679 kernel: SCSI subsystem initialized Feb 13 19:32:43.065669 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:32:43.075675 kernel: iscsi: registered transport (tcp) Feb 13 19:32:43.097081 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:32:43.097134 kernel: QLogic iSCSI HBA Driver Feb 13 19:32:43.145935 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:32:43.162851 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:32:43.205858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:32:43.205922 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:32:43.206988 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:32:43.248663 kernel: raid6: avx2x4 gen() 29326 MB/s Feb 13 19:32:43.265660 kernel: raid6: avx2x2 gen() 30440 MB/s Feb 13 19:32:43.282758 kernel: raid6: avx2x1 gen() 24373 MB/s Feb 13 19:32:43.282787 kernel: raid6: using algorithm avx2x2 gen() 30440 MB/s Feb 13 19:32:43.303827 kernel: raid6: .... xor() 16382 MB/s, rmw enabled Feb 13 19:32:43.303850 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:32:43.327663 kernel: xor: automatically using best checksumming function avx Feb 13 19:32:43.499673 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:32:43.514159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:32:43.532824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:32:43.548483 systemd-udevd[414]: Using default interface naming scheme 'v255'. Feb 13 19:32:43.553713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:32:43.569858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:32:43.585067 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Feb 13 19:32:43.619528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:32:43.634778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:32:43.705096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:32:43.715851 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:32:43.730777 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:32:43.735664 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 19:32:43.738856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:32:43.741876 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:32:43.750648 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:32:43.751657 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:32:43.778540 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:32:43.778781 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:32:43.778798 kernel: AES CTR mode by8 optimization enabled Feb 13 19:32:43.778822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:32:43.778837 kernel: GPT:9289727 != 19775487 Feb 13 19:32:43.778852 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:32:43.778867 kernel: GPT:9289727 != 19775487 Feb 13 19:32:43.778881 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:32:43.778900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:32:43.757142 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:32:43.772933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:32:43.773147 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:32:43.775413 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:32:43.790591 kernel: libata version 3.00 loaded. Feb 13 19:32:43.776882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:32:43.777032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:43.779707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 19:32:43.803363 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:32:43.827310 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:32:43.827333 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:32:43.827539 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:32:43.827747 kernel: scsi host0: ahci Feb 13 19:32:43.827963 kernel: scsi host1: ahci Feb 13 19:32:43.828150 kernel: scsi host2: ahci Feb 13 19:32:43.828341 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (463) Feb 13 19:32:43.828358 kernel: scsi host3: ahci Feb 13 19:32:43.828548 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476) Feb 13 19:32:43.828571 kernel: scsi host4: ahci Feb 13 19:32:43.828882 kernel: scsi host5: ahci Feb 13 19:32:43.829080 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 19:32:43.829097 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 19:32:43.829111 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 19:32:43.829126 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 19:32:43.829140 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 19:32:43.829160 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 19:32:43.794884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:43.800539 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:32:43.821520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:43.845233 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:32:43.851383 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Feb 13 19:32:43.857410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:32:43.863135 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:32:43.864405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:32:43.876774 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:32:43.877939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:32:43.877994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:43.880482 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:43.882600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:43.888183 disk-uuid[555]: Primary Header is updated. Feb 13 19:32:43.888183 disk-uuid[555]: Secondary Entries is updated. Feb 13 19:32:43.888183 disk-uuid[555]: Secondary Header is updated. Feb 13 19:32:43.891759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:32:43.896665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:32:43.904011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:43.914818 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:32:43.934344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:32:44.139437 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:32:44.139528 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:32:44.139541 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:32:44.140656 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:32:44.141668 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:32:44.142673 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:32:44.142749 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:32:44.158777 kernel: ata3.00: applying bridge limits Feb 13 19:32:44.158925 kernel: ata3.00: configured for UDMA/100 Feb 13 19:32:44.161673 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:32:44.217173 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:32:44.229461 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:32:44.229483 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:32:44.899667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:32:44.899730 disk-uuid[558]: The operation has completed successfully. Feb 13 19:32:44.931961 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:32:44.932088 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:32:44.956826 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:32:44.961146 sh[597]: Success Feb 13 19:32:44.974653 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:32:45.010182 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:32:45.021846 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:32:45.024534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:32:45.036218 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:32:45.036247 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:32:45.036258 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:32:45.037268 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:32:45.038684 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:32:45.042882 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:32:45.044549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:32:45.058893 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:32:45.062151 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:32:45.070644 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:32:45.070676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:32:45.070687 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:32:45.073665 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:32:45.082899 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:32:45.084791 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:32:45.094326 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:32:45.101846 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:32:45.239108 ignition[673]: Ignition 2.20.0 Feb 13 19:32:45.239133 ignition[673]: Stage: fetch-offline Feb 13 19:32:45.239242 ignition[673]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:32:45.239254 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:32:45.239358 ignition[673]: parsed url from cmdline: "" Feb 13 19:32:45.239362 ignition[673]: no config URL provided Feb 13 19:32:45.239368 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:32:45.239377 ignition[673]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:32:45.239429 ignition[673]: op(1): [started] loading QEMU firmware config module Feb 13 19:32:45.239434 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:32:45.266748 ignition[673]: op(1): [finished] loading QEMU firmware config module Feb 13 19:32:45.290732 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:32:45.299783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:32:45.312086 ignition[673]: parsing config with SHA512: 87f4e58b27c7a5b09ecb2a63fa8156162eb8e41482434a2b118c9c2d1a46e6ab595750eb603a9237f8ef63ec06e8d8776dcfb809b03f2fce1337b7e924787bd8 Feb 13 19:32:45.323385 unknown[673]: fetched base config from "system" Feb 13 19:32:45.323404 unknown[673]: fetched user config from "qemu" Feb 13 19:32:45.324048 systemd-networkd[786]: lo: Link UP Feb 13 19:32:45.324770 ignition[673]: fetch-offline: fetch-offline passed Feb 13 19:32:45.324054 systemd-networkd[786]: lo: Gained carrier Feb 13 19:32:45.324856 ignition[673]: Ignition finished successfully Feb 13 19:32:45.326012 systemd-networkd[786]: Enumeration completed Feb 13 19:32:45.326328 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 19:32:45.326619 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:45.326624 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:32:45.328145 systemd[1]: Reached target network.target - Network. Feb 13 19:32:45.328174 systemd-networkd[786]: eth0: Link UP Feb 13 19:32:45.328180 systemd-networkd[786]: eth0: Gained carrier Feb 13 19:32:45.328188 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:45.342678 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:32:45.342939 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:32:45.356977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:32:45.366693 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:32:45.373615 ignition[789]: Ignition 2.20.0 Feb 13 19:32:45.373628 ignition[789]: Stage: kargs Feb 13 19:32:45.373829 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:32:45.373840 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:32:45.375007 ignition[789]: kargs: kargs passed Feb 13 19:32:45.375053 ignition[789]: Ignition finished successfully Feb 13 19:32:45.379114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:32:45.392948 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 19:32:45.409446 ignition[798]: Ignition 2.20.0 Feb 13 19:32:45.409458 ignition[798]: Stage: disks Feb 13 19:32:45.409642 ignition[798]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:32:45.409654 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:32:45.410426 ignition[798]: disks: disks passed Feb 13 19:32:45.413162 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:32:45.410472 ignition[798]: Ignition finished successfully Feb 13 19:32:45.414443 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:32:45.416007 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:32:45.418192 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:32:45.419232 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:32:45.419286 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:32:45.429883 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:32:45.460862 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:32:45.467819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:32:45.476834 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:32:45.568655 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:32:45.569735 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:32:45.572084 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:32:45.583720 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:32:45.585002 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:32:45.586722 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 19:32:45.586785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:32:45.594260 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818) Feb 13 19:32:45.586817 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:32:45.597917 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:32:45.597932 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:32:45.597943 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:32:45.598171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:32:45.600886 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:32:45.603623 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:32:45.606101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:32:45.648045 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:32:45.653519 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:32:45.659177 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:32:45.663227 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:32:45.767307 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:32:45.778713 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:32:45.779514 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:32:45.790651 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:32:45.809013 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:32:45.820262 ignition[932]: INFO : Ignition 2.20.0 Feb 13 19:32:45.820262 ignition[932]: INFO : Stage: mount Feb 13 19:32:45.822095 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:32:45.822095 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:32:45.822095 ignition[932]: INFO : mount: mount passed Feb 13 19:32:45.822095 ignition[932]: INFO : Ignition finished successfully Feb 13 19:32:45.827986 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:32:45.837772 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:32:46.035553 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:32:46.044934 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:32:46.055825 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Feb 13 19:32:46.055856 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:32:46.055868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:32:46.057817 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:32:46.060671 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:32:46.061922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:32:46.090765 ignition[962]: INFO : Ignition 2.20.0
Feb 13 19:32:46.090765 ignition[962]: INFO : Stage: files
Feb 13 19:32:46.092658 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:46.092658 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:46.095050 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:32:46.096938 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:32:46.096938 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:32:46.100892 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:32:46.102496 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:32:46.104383 unknown[962]: wrote ssh authorized keys file for user: core
Feb 13 19:32:46.105679 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:32:46.107987 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 19:32:46.109996 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Feb 13 19:32:46.171420 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:32:46.394676 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 19:32:46.394676 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:32:46.398582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:32:46.475804 systemd-networkd[786]: eth0: Gained IPv6LL
Feb 13 19:32:46.923993 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:32:47.220503 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:32:47.220503 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 19:32:47.224805 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:32:47.247368 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:32:47.252417 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:32:47.254182 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:32:47.254182 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:32:47.256982 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:32:47.258426 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:32:47.260208 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:32:47.261900 ignition[962]: INFO : files: files passed
Feb 13 19:32:47.262670 ignition[962]: INFO : Ignition finished successfully
Feb 13 19:32:47.265880 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:32:47.278784 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:32:47.280755 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:32:47.282558 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:32:47.282685 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:32:47.290882 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:32:47.293775 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:32:47.293775 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:32:47.297088 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:32:47.296394 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:32:47.298703 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:32:47.308813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:32:47.336757 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:32:47.336897 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:32:47.339513 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:32:47.342004 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:32:47.343200 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:32:47.344104 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:32:47.365213 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:32:47.374860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:32:47.384616 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:32:47.387040 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:32:47.389397 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:32:47.391244 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:32:47.392255 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:32:47.394951 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:32:47.397030 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:32:47.398894 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:32:47.401075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:32:47.403406 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:32:47.405688 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:32:47.407784 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:32:47.410290 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:32:47.412375 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:32:47.414412 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:32:47.416070 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:32:47.417107 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:32:47.419502 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:32:47.421758 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:32:47.424140 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:32:47.425121 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:32:47.427800 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:32:47.428815 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:32:47.431078 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:32:47.432162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:32:47.434547 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:32:47.436359 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:32:47.441724 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:32:47.441939 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:32:47.444537 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:32:47.446274 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:32:47.446405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:32:47.449069 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:32:47.449194 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:32:47.450063 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:32:47.450213 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:32:47.451956 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:32:47.452103 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:32:47.468817 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:32:47.470840 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:32:47.470982 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:32:47.475399 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:32:47.477436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:32:47.478763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:32:47.481570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:32:47.482722 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:32:47.483243 ignition[1016]: INFO : Ignition 2.20.0
Feb 13 19:32:47.483243 ignition[1016]: INFO : Stage: umount
Feb 13 19:32:47.484202 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:32:47.484202 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:32:47.484808 ignition[1016]: INFO : umount: umount passed
Feb 13 19:32:47.484808 ignition[1016]: INFO : Ignition finished successfully
Feb 13 19:32:47.492047 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:32:47.493173 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:32:47.497262 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:32:47.498432 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:32:47.501501 systemd[1]: Stopped target network.target - Network.
Feb 13 19:32:47.503512 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:32:47.504585 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:32:47.506654 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:32:47.506729 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:32:47.510134 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:32:47.511478 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:32:47.513944 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:32:47.514009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:32:47.517923 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:32:47.520504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:32:47.522673 systemd-networkd[786]: eth0: DHCPv6 lease lost
Feb 13 19:32:47.524922 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:32:47.526423 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:32:47.527560 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:32:47.530583 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:32:47.531683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:32:47.536459 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:32:47.537589 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:32:47.548749 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:32:47.550809 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:32:47.550880 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:32:47.554838 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:32:47.554898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:32:47.557287 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:32:47.558460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:32:47.561820 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:32:47.562929 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:32:47.565951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:32:47.575605 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:32:47.576846 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:32:47.581457 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:32:47.582734 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:32:47.585876 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:32:47.586997 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:32:47.589141 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:32:47.589195 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:32:47.592457 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:32:47.593507 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:32:47.595990 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:32:47.597053 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:32:47.599390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:32:47.600543 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:32:47.615782 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:32:47.618191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:32:47.619398 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:32:47.622190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:32:47.622252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:32:47.625836 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:32:47.627041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:32:47.695389 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:32:47.696452 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:32:47.699025 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:32:47.701104 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:32:47.702094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:32:47.720808 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:32:47.730469 systemd[1]: Switching root.
Feb 13 19:32:47.767860 systemd-journald[194]: Journal stopped
Feb 13 19:32:48.914717 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:32:48.914807 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:32:48.914823 kernel: SELinux: policy capability open_perms=1
Feb 13 19:32:48.914835 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:32:48.914857 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:32:48.914876 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:32:48.914888 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:32:48.914900 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:32:48.914912 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:32:48.914923 kernel: audit: type=1403 audit(1739475168.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:32:48.914936 systemd[1]: Successfully loaded SELinux policy in 40.564ms.
Feb 13 19:32:48.914967 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.741ms.
Feb 13 19:32:48.914981 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:32:48.914993 systemd[1]: Detected virtualization kvm.
Feb 13 19:32:48.915008 systemd[1]: Detected architecture x86-64.
Feb 13 19:32:48.915022 systemd[1]: Detected first boot.
Feb 13 19:32:48.915034 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:32:48.915047 zram_generator::config[1061]: No configuration found.
Feb 13 19:32:48.915061 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:32:48.915074 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:32:48.915086 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:32:48.915098 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:32:48.915114 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:32:48.915126 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:32:48.915138 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:32:48.915150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:32:48.915162 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:32:48.915175 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:32:48.915188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:32:48.915200 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:32:48.915212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:32:48.915228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:32:48.915241 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:32:48.915252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:32:48.915265 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:32:48.915277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:32:48.915295 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:32:48.915307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:32:48.915319 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:32:48.915332 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:32:48.915346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:32:48.915358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:32:48.915371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:32:48.915383 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:32:48.915395 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:32:48.915407 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:32:48.915419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:32:48.915432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:32:48.915446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:32:48.915460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:32:48.915472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:32:48.915484 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:32:48.915507 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:32:48.915519 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:32:48.915531 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:32:48.915544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:48.915556 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:32:48.915571 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:32:48.915583 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:32:48.915596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:32:48.915608 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:32:48.915621 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:32:48.915648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:32:48.915661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:32:48.915673 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:32:48.915695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:32:48.915707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:32:48.915719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:32:48.915731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:32:48.915744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:32:48.915760 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:32:48.915772 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:32:48.915784 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:32:48.915800 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:32:48.915815 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:32:48.915830 kernel: fuse: init (API version 7.39)
Feb 13 19:32:48.915845 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:32:48.915860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:32:48.915875 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:32:48.915891 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:32:48.915906 kernel: ACPI: bus type drm_connector registered
Feb 13 19:32:48.915928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:32:48.915943 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:32:48.915961 systemd[1]: Stopped verity-setup.service.
Feb 13 19:32:48.915977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:32:48.916007 systemd-journald[1138]: Collecting audit messages is disabled.
Feb 13 19:32:48.916028 kernel: loop: module loaded
Feb 13 19:32:48.916041 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:32:48.916053 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:32:48.916065 systemd-journald[1138]: Journal started
Feb 13 19:32:48.916090 systemd-journald[1138]: Runtime Journal (/run/log/journal/6b5c365c249e47ae93886df1a68d19f1) is 6.0M, max 48.2M, 42.2M free.
Feb 13 19:32:48.696267 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:32:48.713770 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:32:48.714234 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:32:48.920046 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:32:48.920886 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:32:48.922103 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:32:48.923374 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:32:48.924615 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:32:48.925891 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:32:48.927358 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:32:48.928966 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:32:48.929149 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:32:48.930744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:32:48.930944 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:32:48.932558 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:32:48.932749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:32:48.934227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:32:48.934395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:32:48.936020 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:32:48.936186 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:32:48.937914 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:32:48.938087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:32:48.939497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:32:48.940916 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:32:48.942692 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:32:48.959912 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:32:48.970787 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:32:48.973424 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:32:48.974597 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:32:48.974646 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:32:48.976783 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:32:48.979198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:32:48.983209 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:32:48.984691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:32:48.988680 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:32:48.993738 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:32:48.995016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:32:48.996527 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:32:48.998356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:32:49.000946 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:32:49.008019 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:32:49.012882 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:32:49.016256 systemd-journald[1138]: Time spent on flushing to /var/log/journal/6b5c365c249e47ae93886df1a68d19f1 is 19.452ms for 1043 entries. Feb 13 19:32:49.016256 systemd-journald[1138]: System Journal (/var/log/journal/6b5c365c249e47ae93886df1a68d19f1) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:32:49.120137 systemd-journald[1138]: Received client request to flush runtime journal. Feb 13 19:32:49.120183 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 19:32:49.120199 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:32:49.016146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:32:49.018543 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:32:49.019876 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:32:49.022018 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:32:49.088474 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:32:49.091822 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:32:49.103574 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:32:49.113212 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:32:49.125472 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:32:49.127205 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:32:49.133435 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:32:49.143812 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:32:49.144502 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Feb 13 19:32:49.146241 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:32:49.152654 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 19:32:49.158927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:32:49.188542 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Feb 13 19:32:49.188562 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Feb 13 19:32:49.224305 kernel: loop2: detected capacity change from 0 to 141000 Feb 13 19:32:49.222270 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:32:49.258661 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 19:32:49.273664 kernel: loop4: detected capacity change from 0 to 218376 Feb 13 19:32:49.284651 kernel: loop5: detected capacity change from 0 to 141000 Feb 13 19:32:49.291651 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:32:49.292349 (sd-merge)[1200]: Merged extensions into '/usr'. Feb 13 19:32:49.368914 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:32:49.369139 systemd[1]: Reloading... Feb 13 19:32:49.507922 zram_generator::config[1226]: No configuration found. Feb 13 19:32:49.544306 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:32:49.655019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:49.705315 systemd[1]: Reloading finished in 335 ms. Feb 13 19:32:49.739456 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Feb 13 19:32:49.741452 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:32:49.750081 systemd[1]: Starting ensure-sysext.service... Feb 13 19:32:49.752060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:32:49.766102 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:32:49.766131 systemd[1]: Reloading... Feb 13 19:32:49.814174 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:32:49.814478 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:32:49.815540 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:32:49.816026 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Feb 13 19:32:49.816119 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Feb 13 19:32:49.824337 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:32:49.824349 systemd-tmpfiles[1264]: Skipping /boot Feb 13 19:32:49.872669 zram_generator::config[1290]: No configuration found. Feb 13 19:32:49.881847 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:32:49.881960 systemd-tmpfiles[1264]: Skipping /boot Feb 13 19:32:50.017741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:32:50.076544 systemd[1]: Reloading finished in 309 ms. Feb 13 19:32:50.099244 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:32:50.114445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 19:32:50.124342 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:32:50.127038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:32:50.129873 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:32:50.135214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:32:50.139061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:32:50.142131 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:32:50.146243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:32:50.146468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:32:50.150532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:32:50.153949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:32:50.156714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:32:50.158271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:32:50.161848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:32:50.162944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:32:50.166359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:32:50.167079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:32:50.170131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 19:32:50.170333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:32:50.173281 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:32:50.173481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:32:50.180482 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:32:50.183337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:32:50.184992 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Feb 13 19:32:50.189993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:32:50.190211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:32:50.195197 augenrules[1364]: No rules Feb 13 19:32:50.196958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:32:50.199480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:32:50.203806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:32:50.205070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:32:50.210025 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:32:50.211305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:32:50.212734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:32:50.215378 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:32:50.215688 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Feb 13 19:32:50.217354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:32:50.217532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:32:50.219681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:32:50.219883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:32:50.223101 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:32:50.223271 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:32:50.224953 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:32:50.228437 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:32:50.238147 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:32:50.262038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:32:50.270828 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:32:50.272068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:32:50.275665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:32:50.278772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:32:50.282979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:32:50.290804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:32:50.292887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:32:50.301394 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 19:32:50.302711 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:32:50.302739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:32:50.303400 systemd[1]: Finished ensure-sysext.service. Feb 13 19:32:50.306080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:32:50.306253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:32:50.307882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:32:50.308070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:32:50.311134 augenrules[1402]: /sbin/augenrules: No change Feb 13 19:32:50.321779 augenrules[1429]: No rules Feb 13 19:32:50.314005 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:32:50.314174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:32:50.318165 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:32:50.318340 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:32:50.323117 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:32:50.324431 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:32:50.345711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1380) Feb 13 19:32:50.346996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:32:50.348728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 19:32:50.348794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:32:50.355865 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:32:50.361033 systemd-resolved[1332]: Positive Trust Anchors: Feb 13 19:32:50.361049 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:32:50.361080 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:32:50.364895 systemd-resolved[1332]: Defaulting to hostname 'linux'. Feb 13 19:32:50.368827 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:32:50.370120 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:32:50.375669 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:32:50.381665 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 19:32:50.384220 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 19:32:50.387786 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:32:50.388016 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:32:50.388209 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:32:50.390390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Feb 13 19:32:50.399847 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:32:50.406660 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:32:50.406889 systemd-networkd[1417]: lo: Link UP Feb 13 19:32:50.407215 systemd-networkd[1417]: lo: Gained carrier Feb 13 19:32:50.408914 systemd-networkd[1417]: Enumeration completed Feb 13 19:32:50.409093 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:32:50.410275 systemd[1]: Reached target network.target - Network. Feb 13 19:32:50.410430 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:50.410485 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:32:50.412607 systemd-networkd[1417]: eth0: Link UP Feb 13 19:32:50.412615 systemd-networkd[1417]: eth0: Gained carrier Feb 13 19:32:50.412674 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:32:50.422924 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:32:50.424414 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:32:50.495076 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:32:50.517998 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:32:51.089376 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:32:51.089410 systemd-resolved[1332]: Clock change detected. Flushing caches. Feb 13 19:32:51.089468 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Feb 13 19:32:51.089523 systemd-timesyncd[1439]: Initial clock synchronization to Thu 2025-02-13 19:32:51.089347 UTC. Feb 13 19:32:51.095105 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:32:51.114535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:51.150967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:32:51.152621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:51.177742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:32:51.182396 kernel: kvm_amd: TSC scaling supported Feb 13 19:32:51.182499 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:32:51.182526 kernel: kvm_amd: Nested Paging enabled Feb 13 19:32:51.183773 kernel: kvm_amd: LBR virtualization supported Feb 13 19:32:51.183808 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:32:51.184366 kernel: kvm_amd: Virtual GIF supported Feb 13 19:32:51.204281 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:32:51.235857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:32:51.253685 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:32:51.268336 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:32:51.308309 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:32:51.337623 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:32:51.351112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:32:51.352451 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:32:51.353729 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 19:32:51.355069 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:32:51.356655 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:32:51.357896 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:32:51.359404 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:32:51.360674 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:32:51.360708 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:32:51.361650 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:32:51.377686 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:32:51.380535 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:32:51.388858 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:32:51.391571 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:32:51.393225 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:32:51.394437 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:32:51.395426 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:32:51.396422 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:32:51.396449 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:32:51.397519 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:32:51.399930 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:32:51.403222 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 19:32:51.404659 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:32:51.415929 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:32:51.417038 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:32:51.418989 jq[1472]: false Feb 13 19:32:51.419434 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:32:51.422364 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:32:51.425604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:32:51.430242 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:32:51.436045 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:32:51.438223 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:32:51.438760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:32:51.440133 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:32:51.442335 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 19:32:51.448248 extend-filesystems[1473]: Found loop3 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found loop4 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found loop5 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found sr0 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda1 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda2 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda3 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found usr Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda4 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda6 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda7 Feb 13 19:32:51.448248 extend-filesystems[1473]: Found vda9 Feb 13 19:32:51.448248 extend-filesystems[1473]: Checking size of /dev/vda9 Feb 13 19:32:51.447333 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:32:51.457430 dbus-daemon[1471]: [system] SELinux support is enabled Feb 13 19:32:51.458431 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:32:51.464299 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:32:51.464516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:32:51.464835 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:32:51.465023 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:32:51.467809 jq[1484]: true Feb 13 19:32:51.472347 update_engine[1483]: I20250213 19:32:51.472240 1483 main.cc:92] Flatcar Update Engine starting Feb 13 19:32:51.473672 update_engine[1483]: I20250213 19:32:51.473632 1483 update_check_scheduler.cc:74] Next update check in 8m3s Feb 13 19:32:51.478816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 19:32:51.480306 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:32:51.497210 jq[1494]: true Feb 13 19:32:51.498899 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:32:51.509279 extend-filesystems[1473]: Resized partition /dev/vda9 Feb 13 19:32:51.510684 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:32:51.513749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:32:51.513882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:32:51.515389 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:32:51.515409 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:32:51.518279 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:32:51.519536 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:32:51.526362 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:32:51.529222 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1391) Feb 13 19:32:51.528974 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:32:51.532159 tar[1493]: linux-amd64/LICENSE Feb 13 19:32:51.534510 tar[1493]: linux-amd64/helm Feb 13 19:32:51.537047 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:32:51.577094 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 19:32:51.586009 systemd-logind[1480]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 19:32:51.586050 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:32:51.588827 systemd-logind[1480]: New seat seat0. Feb 13 19:32:51.590459 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:32:51.593812 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:59478.service - OpenSSH per-connection server daemon (10.0.0.1:59478). Feb 13 19:32:51.598743 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:32:51.628837 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:32:51.601835 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:32:51.604136 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:32:51.604860 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:32:51.610746 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:32:51.631654 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:32:51.643059 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:32:51.648953 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:32:51.648953 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:32:51.648953 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:32:51.643334 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:32:51.658558 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Feb 13 19:32:51.645633 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:32:51.655625 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 19:32:51.664350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:32:51.699365 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:32:51.702418 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:32:51.705400 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:32:51.728880 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 59478 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:32:51.731685 sshd-session[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:51.741906 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:32:51.749844 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:32:51.754037 systemd-logind[1480]: New session 1 of user core. Feb 13 19:32:51.800135 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:32:51.810479 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:32:51.823590 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:32:51.908240 containerd[1495]: time="2025-02-13T19:32:51.908026989Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:32:51.934822 containerd[1495]: time="2025-02-13T19:32:51.934773288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:51.936991 containerd[1495]: time="2025-02-13T19:32:51.936930253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:51.936991 containerd[1495]: time="2025-02-13T19:32:51.936993531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:32:51.937057 containerd[1495]: time="2025-02-13T19:32:51.937021895Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:32:51.937259 containerd[1495]: time="2025-02-13T19:32:51.937239783Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:32:51.937298 containerd[1495]: time="2025-02-13T19:32:51.937260572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937364 containerd[1495]: time="2025-02-13T19:32:51.937334651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937364 containerd[1495]: time="2025-02-13T19:32:51.937355861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937593 containerd[1495]: time="2025-02-13T19:32:51.937573128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937593 containerd[1495]: time="2025-02-13T19:32:51.937591593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937637 containerd[1495]: time="2025-02-13T19:32:51.937606180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937637 containerd[1495]: time="2025-02-13T19:32:51.937616179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937728 containerd[1495]: time="2025-02-13T19:32:51.937710476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:51.937973 containerd[1495]: time="2025-02-13T19:32:51.937953732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:32:51.938100 containerd[1495]: time="2025-02-13T19:32:51.938072745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:32:51.938100 containerd[1495]: time="2025-02-13T19:32:51.938088976Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:32:51.938241 containerd[1495]: time="2025-02-13T19:32:51.938223288Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:32:51.938322 containerd[1495]: time="2025-02-13T19:32:51.938282809Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:32:51.945078 containerd[1495]: time="2025-02-13T19:32:51.945047055Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:32:51.945148 containerd[1495]: time="2025-02-13T19:32:51.945114080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 19:32:51.945148 containerd[1495]: time="2025-02-13T19:32:51.945139478Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:32:51.946644 containerd[1495]: time="2025-02-13T19:32:51.946216949Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:32:51.946644 containerd[1495]: time="2025-02-13T19:32:51.946263927Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:32:51.946644 containerd[1495]: time="2025-02-13T19:32:51.946550524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:32:51.957942 containerd[1495]: time="2025-02-13T19:32:51.957658486Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:32:51.958023 containerd[1495]: time="2025-02-13T19:32:51.957983465Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:32:51.958023 containerd[1495]: time="2025-02-13T19:32:51.958000517Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:32:51.958023 containerd[1495]: time="2025-02-13T19:32:51.958017719Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:32:51.958100 containerd[1495]: time="2025-02-13T19:32:51.958033659Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958100 containerd[1495]: time="2025-02-13T19:32:51.958048076Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 19:32:51.958100 containerd[1495]: time="2025-02-13T19:32:51.958059948Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958100 containerd[1495]: time="2025-02-13T19:32:51.958079766Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958100 containerd[1495]: time="2025-02-13T19:32:51.958094904Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958263 containerd[1495]: time="2025-02-13T19:32:51.958107608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958263 containerd[1495]: time="2025-02-13T19:32:51.958126042Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958263 containerd[1495]: time="2025-02-13T19:32:51.958137885Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:32:51.958263 containerd[1495]: time="2025-02-13T19:32:51.958208778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958263 containerd[1495]: time="2025-02-13T19:32:51.958231500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958263 containerd[1495]: time="2025-02-13T19:32:51.958259052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958274080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958287305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958300770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958331638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958367385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958397722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958423741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958435473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958447135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958458 containerd[1495]: time="2025-02-13T19:32:51.958459367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958743 containerd[1495]: time="2025-02-13T19:32:51.958477982Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:32:51.958743 containerd[1495]: time="2025-02-13T19:32:51.958498601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 19:32:51.958743 containerd[1495]: time="2025-02-13T19:32:51.958526173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.958743 containerd[1495]: time="2025-02-13T19:32:51.958553975Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:32:51.959867 containerd[1495]: time="2025-02-13T19:32:51.959824668Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:32:51.959899 containerd[1495]: time="2025-02-13T19:32:51.959873690Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:32:51.959899 containerd[1495]: time="2025-02-13T19:32:51.959886885Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:32:51.959998 containerd[1495]: time="2025-02-13T19:32:51.959899408Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:32:51.959998 containerd[1495]: time="2025-02-13T19:32:51.959920958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:32:51.959998 containerd[1495]: time="2025-02-13T19:32:51.959942228Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:32:51.959998 containerd[1495]: time="2025-02-13T19:32:51.959965131Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:32:51.959998 containerd[1495]: time="2025-02-13T19:32:51.959989848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:32:51.960559 containerd[1495]: time="2025-02-13T19:32:51.960498431Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:32:51.960559 containerd[1495]: time="2025-02-13T19:32:51.960557462Z" level=info msg="Connect containerd service" Feb 13 19:32:51.960803 containerd[1495]: time="2025-02-13T19:32:51.960592698Z" level=info msg="using legacy CRI server" Feb 13 19:32:51.960803 containerd[1495]: time="2025-02-13T19:32:51.960601815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:32:51.960803 containerd[1495]: time="2025-02-13T19:32:51.960729906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:32:51.961528 containerd[1495]: time="2025-02-13T19:32:51.961497425Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:32:51.961887 containerd[1495]: time="2025-02-13T19:32:51.961708150Z" level=info msg="Start subscribing containerd event" Feb 13 19:32:51.961887 containerd[1495]: time="2025-02-13T19:32:51.961816714Z" level=info msg="Start recovering state" Feb 13 19:32:51.961969 containerd[1495]: time="2025-02-13T19:32:51.961940997Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 19:32:51.962138 containerd[1495]: time="2025-02-13T19:32:51.961956676Z" level=info msg="Start event monitor" Feb 13 19:32:51.962398 containerd[1495]: time="2025-02-13T19:32:51.962131795Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:32:51.962478 containerd[1495]: time="2025-02-13T19:32:51.962462785Z" level=info msg="Start snapshots syncer" Feb 13 19:32:51.962851 containerd[1495]: time="2025-02-13T19:32:51.962531284Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:32:51.962851 containerd[1495]: time="2025-02-13T19:32:51.962546122Z" level=info msg="Start streaming server" Feb 13 19:32:51.962743 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:32:51.963223 containerd[1495]: time="2025-02-13T19:32:51.963204667Z" level=info msg="containerd successfully booted in 0.057634s" Feb 13 19:32:52.021610 systemd[1559]: Queued start job for default target default.target. Feb 13 19:32:52.035574 systemd[1559]: Created slice app.slice - User Application Slice. Feb 13 19:32:52.035603 systemd[1559]: Reached target paths.target - Paths. Feb 13 19:32:52.035617 systemd[1559]: Reached target timers.target - Timers. Feb 13 19:32:52.037364 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:32:52.053081 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:32:52.053220 systemd[1559]: Reached target sockets.target - Sockets. Feb 13 19:32:52.053242 systemd[1559]: Reached target basic.target - Basic System. Feb 13 19:32:52.053301 systemd[1559]: Reached target default.target - Main User Target. Feb 13 19:32:52.053345 systemd[1559]: Startup finished in 164ms. Feb 13 19:32:52.053641 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:32:52.062479 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 19:32:52.149553 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:59486.service - OpenSSH per-connection server daemon (10.0.0.1:59486). Feb 13 19:32:52.190719 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 59486 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:32:52.193184 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:52.198749 systemd-logind[1480]: New session 2 of user core. Feb 13 19:32:52.208422 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:32:52.229478 systemd-networkd[1417]: eth0: Gained IPv6LL Feb 13 19:32:52.263723 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:32:52.266049 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:32:52.274481 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:32:52.277265 tar[1493]: linux-amd64/README.md Feb 13 19:32:52.278178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:32:52.280920 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:32:52.287604 sshd[1576]: Connection closed by 10.0.0.1 port 59486 Feb 13 19:32:52.289090 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:52.293412 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:59486.service: Deactivated successfully. Feb 13 19:32:52.298724 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:32:52.300812 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:32:52.301625 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:32:52.308638 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:32:52.312527 systemd-logind[1480]: Removed session 2. 
Feb 13 19:32:52.314558 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:59502.service - OpenSSH per-connection server daemon (10.0.0.1:59502). Feb 13 19:32:52.321437 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:32:52.321698 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:32:52.323428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:32:52.357807 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 59502 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:32:52.359516 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:32:52.364150 systemd-logind[1480]: New session 3 of user core. Feb 13 19:32:52.371334 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:32:52.536106 sshd[1603]: Connection closed by 10.0.0.1 port 59502 Feb 13 19:32:52.536791 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Feb 13 19:32:52.541120 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:59502.service: Deactivated successfully. Feb 13 19:32:52.543076 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:32:52.543776 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:32:52.544672 systemd-logind[1480]: Removed session 3. Feb 13 19:32:53.885277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:32:53.887008 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:32:53.889300 systemd[1]: Startup finished in 730ms (kernel) + 5.476s (initrd) + 5.178s (userspace) = 11.385s. 
Feb 13 19:32:53.890068 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:32:53.940822 agetty[1555]: failed to open credentials directory Feb 13 19:32:53.944790 agetty[1554]: failed to open credentials directory Feb 13 19:32:54.366374 kubelet[1612]: E0213 19:32:54.366242 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:32:54.370241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:32:54.370439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:32:54.370821 systemd[1]: kubelet.service: Consumed 1.951s CPU time. Feb 13 19:33:02.540605 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:49538.service - OpenSSH per-connection server daemon (10.0.0.1:49538). Feb 13 19:33:02.588424 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 49538 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:33:02.590128 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:02.594597 systemd-logind[1480]: New session 4 of user core. Feb 13 19:33:02.605426 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:33:02.661011 sshd[1627]: Connection closed by 10.0.0.1 port 49538 Feb 13 19:33:02.661400 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:02.681848 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:49538.service: Deactivated successfully. Feb 13 19:33:02.684424 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:33:02.686554 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. 
Feb 13 19:33:02.702729 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:49550.service - OpenSSH per-connection server daemon (10.0.0.1:49550). Feb 13 19:33:02.703876 systemd-logind[1480]: Removed session 4. Feb 13 19:33:02.742645 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 49550 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:33:02.744636 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:02.749735 systemd-logind[1480]: New session 5 of user core. Feb 13 19:33:02.760412 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:33:02.811770 sshd[1634]: Connection closed by 10.0.0.1 port 49550 Feb 13 19:33:02.812129 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:02.834315 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:49550.service: Deactivated successfully. Feb 13 19:33:02.836220 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:33:02.837985 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:33:02.853605 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:49552.service - OpenSSH per-connection server daemon (10.0.0.1:49552). Feb 13 19:33:02.854886 systemd-logind[1480]: Removed session 5. Feb 13 19:33:02.892180 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 49552 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:33:02.893771 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:02.898294 systemd-logind[1480]: New session 6 of user core. Feb 13 19:33:02.916445 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:33:02.971825 sshd[1641]: Connection closed by 10.0.0.1 port 49552 Feb 13 19:33:02.972327 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:02.983578 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:49552.service: Deactivated successfully. Feb 13 19:33:02.985476 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:33:02.987164 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:33:03.003580 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:49554.service - OpenSSH per-connection server daemon (10.0.0.1:49554). Feb 13 19:33:03.004643 systemd-logind[1480]: Removed session 6. Feb 13 19:33:03.041856 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 49554 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:33:03.043628 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:03.048032 systemd-logind[1480]: New session 7 of user core. Feb 13 19:33:03.058392 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:33:03.118989 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:33:03.119358 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:03.135824 sudo[1649]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:03.137742 sshd[1648]: Connection closed by 10.0.0.1 port 49554 Feb 13 19:33:03.138254 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:03.151770 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:49554.service: Deactivated successfully. Feb 13 19:33:03.153542 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:33:03.155077 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:33:03.162435 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:49570.service - OpenSSH per-connection server daemon (10.0.0.1:49570). 
Feb 13 19:33:03.163330 systemd-logind[1480]: Removed session 7. Feb 13 19:33:03.201777 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 49570 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:33:03.203789 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:03.208695 systemd-logind[1480]: New session 8 of user core. Feb 13 19:33:03.218370 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:33:03.273053 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:33:03.273425 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:03.276760 sudo[1658]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:03.282362 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:33:03.282683 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:03.303599 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:33:03.335698 augenrules[1680]: No rules Feb 13 19:33:03.337860 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:33:03.338106 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:33:03.339309 sudo[1657]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:03.340781 sshd[1656]: Connection closed by 10.0.0.1 port 49570 Feb 13 19:33:03.341262 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:03.356064 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:49570.service: Deactivated successfully. Feb 13 19:33:03.357761 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:33:03.359087 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. 
Feb 13 19:33:03.360393 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:49572.service - OpenSSH per-connection server daemon (10.0.0.1:49572). Feb 13 19:33:03.361157 systemd-logind[1480]: Removed session 8. Feb 13 19:33:03.412634 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 49572 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:33:03.414304 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:03.419015 systemd-logind[1480]: New session 9 of user core. Feb 13 19:33:03.433380 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:33:03.488563 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:33:03.488886 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:03.947443 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:33:03.947638 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:33:04.534973 dockerd[1712]: time="2025-02-13T19:33:04.534843337Z" level=info msg="Starting up" Feb 13 19:33:04.536118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:33:04.548330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:04.832417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:33:04.838740 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:04.961039 kubelet[1744]: E0213 19:33:04.960980 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:04.968057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:04.968334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:05.091375 dockerd[1712]: time="2025-02-13T19:33:05.091232475Z" level=info msg="Loading containers: start." Feb 13 19:33:05.286225 kernel: Initializing XFRM netlink socket Feb 13 19:33:05.378790 systemd-networkd[1417]: docker0: Link UP Feb 13 19:33:05.419984 dockerd[1712]: time="2025-02-13T19:33:05.419920863Z" level=info msg="Loading containers: done." Feb 13 19:33:05.438144 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2897775833-merged.mount: Deactivated successfully. 
Feb 13 19:33:05.440781 dockerd[1712]: time="2025-02-13T19:33:05.440723205Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:33:05.440917 dockerd[1712]: time="2025-02-13T19:33:05.440885770Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:33:05.441116 dockerd[1712]: time="2025-02-13T19:33:05.441081567Z" level=info msg="Daemon has completed initialization" Feb 13 19:33:05.484762 dockerd[1712]: time="2025-02-13T19:33:05.484662757Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:33:05.485019 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:33:06.281535 containerd[1495]: time="2025-02-13T19:33:06.281476237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:33:07.469871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705824074.mount: Deactivated successfully. 
Feb 13 19:33:08.767618 containerd[1495]: time="2025-02-13T19:33:08.767551926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:08.768558 containerd[1495]: time="2025-02-13T19:33:08.768500725Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:33:08.769970 containerd[1495]: time="2025-02-13T19:33:08.769922211Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:08.778142 containerd[1495]: time="2025-02-13T19:33:08.778027662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:08.780955 containerd[1495]: time="2025-02-13T19:33:08.780860585Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.499270875s" Feb 13 19:33:08.781107 containerd[1495]: time="2025-02-13T19:33:08.780998433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:33:08.783064 containerd[1495]: time="2025-02-13T19:33:08.783024843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:33:10.197071 containerd[1495]: time="2025-02-13T19:33:10.197004320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:33:10.197791 containerd[1495]: time="2025-02-13T19:33:10.197726664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:33:10.199095 containerd[1495]: time="2025-02-13T19:33:10.199062179Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:10.202173 containerd[1495]: time="2025-02-13T19:33:10.202103552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:10.207110 containerd[1495]: time="2025-02-13T19:33:10.206987101Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.423906874s" Feb 13 19:33:10.207110 containerd[1495]: time="2025-02-13T19:33:10.207037285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:33:10.207843 containerd[1495]: time="2025-02-13T19:33:10.207787822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:33:11.944117 containerd[1495]: time="2025-02-13T19:33:11.944035406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:11.944934 containerd[1495]: time="2025-02-13T19:33:11.944863419Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276"
Feb 13 19:33:11.946282 containerd[1495]: time="2025-02-13T19:33:11.946247435Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:11.949215 containerd[1495]: time="2025-02-13T19:33:11.949151631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:11.950398 containerd[1495]: time="2025-02-13T19:33:11.950352734Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.742518494s" Feb 13 19:33:11.950398 containerd[1495]: time="2025-02-13T19:33:11.950385114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:33:11.950887 containerd[1495]: time="2025-02-13T19:33:11.950868341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:33:13.391905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660371707.mount: Deactivated successfully.
Feb 13 19:33:14.444803 containerd[1495]: time="2025-02-13T19:33:14.444728098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:14.445930 containerd[1495]: time="2025-02-13T19:33:14.445871121Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:33:14.449856 containerd[1495]: time="2025-02-13T19:33:14.449815408Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:14.485729 containerd[1495]: time="2025-02-13T19:33:14.485656924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:14.486513 containerd[1495]: time="2025-02-13T19:33:14.486454480Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.535553678s" Feb 13 19:33:14.486586 containerd[1495]: time="2025-02-13T19:33:14.486514943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:33:14.487114 containerd[1495]: time="2025-02-13T19:33:14.487079111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:33:15.014525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:33:15.277545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:33:15.288472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303070583.mount: Deactivated successfully. Feb 13 19:33:15.482137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:15.488292 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:15.687572 kubelet[2015]: E0213 19:33:15.687400 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:15.692460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:15.692737 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:18.463461 containerd[1495]: time="2025-02-13T19:33:18.463375376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:18.464834 containerd[1495]: time="2025-02-13T19:33:18.464736499Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:33:18.466474 containerd[1495]: time="2025-02-13T19:33:18.466400509Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:18.471227 containerd[1495]: time="2025-02-13T19:33:18.471171126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:33:18.472465 containerd[1495]: time="2025-02-13T19:33:18.472411312Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.985300982s" Feb 13 19:33:18.472465 containerd[1495]: time="2025-02-13T19:33:18.472458230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:33:18.473578 containerd[1495]: time="2025-02-13T19:33:18.473542804Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:33:18.973229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015302532.mount: Deactivated successfully. Feb 13 19:33:18.980708 containerd[1495]: time="2025-02-13T19:33:18.980658436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:18.981444 containerd[1495]: time="2025-02-13T19:33:18.981376944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:33:18.982885 containerd[1495]: time="2025-02-13T19:33:18.982845588Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:18.985554 containerd[1495]: time="2025-02-13T19:33:18.985508652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:33:18.986288 containerd[1495]: time="2025-02-13T19:33:18.986244502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.662104ms" Feb 13 19:33:18.986288 containerd[1495]: time="2025-02-13T19:33:18.986283616Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:33:18.986905 containerd[1495]: time="2025-02-13T19:33:18.986871678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:33:19.578935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494134657.mount: Deactivated successfully. Feb 13 19:33:24.926533 containerd[1495]: time="2025-02-13T19:33:24.926460896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:24.927498 containerd[1495]: time="2025-02-13T19:33:24.927442602Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:33:24.928821 containerd[1495]: time="2025-02-13T19:33:24.928790211Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:24.932147 containerd[1495]: time="2025-02-13T19:33:24.932097625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:33:24.933255 containerd[1495]: time="2025-02-13T19:33:24.933214610Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.94628921s" Feb 13 19:33:24.933297 containerd[1495]: time="2025-02-13T19:33:24.933257082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:33:25.777616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:33:25.787387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:25.946921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:25.951701 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:25.992041 kubelet[2157]: E0213 19:33:25.991965 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:25.995559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:25.995811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:27.137421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:27.148415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:27.173029 systemd[1]: Reloading requested from client PID 2173 ('systemctl') (unit session-9.scope)... Feb 13 19:33:27.173051 systemd[1]: Reloading... Feb 13 19:33:27.269244 zram_generator::config[2215]: No configuration found.
Feb 13 19:33:28.458220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:33:28.541422 systemd[1]: Reloading finished in 1367 ms. Feb 13 19:33:28.595337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:28.598855 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:33:28.599098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:28.612484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:28.770615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:28.777436 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:33:28.841378 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:33:28.841378 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:33:28.841378 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:33:28.841824 kubelet[2262]: I0213 19:33:28.841468 2262 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:33:29.246725 kubelet[2262]: I0213 19:33:29.246671 2262 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:33:29.246725 kubelet[2262]: I0213 19:33:29.246715 2262 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:33:29.247095 kubelet[2262]: I0213 19:33:29.247070 2262 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:33:29.289628 kubelet[2262]: E0213 19:33:29.289573 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:29.291298 kubelet[2262]: I0213 19:33:29.291260 2262 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:33:29.328630 kubelet[2262]: E0213 19:33:29.328581 2262 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:33:29.328630 kubelet[2262]: I0213 19:33:29.328621 2262 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:33:29.334013 kubelet[2262]: I0213 19:33:29.333967 2262 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:33:29.349752 kubelet[2262]: I0213 19:33:29.349639 2262 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:33:29.350001 kubelet[2262]: I0213 19:33:29.349746 2262 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:33:29.350175 kubelet[2262]: I0213 19:33:29.350006 2262 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:33:29.350175 kubelet[2262]: I0213 19:33:29.350024 2262 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:33:29.350295 kubelet[2262]: I0213 19:33:29.350267 2262 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:29.374325 kubelet[2262]: I0213 19:33:29.374275 2262 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:33:29.374325 kubelet[2262]: I0213 19:33:29.374329 2262 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:33:29.374594 kubelet[2262]: I0213 19:33:29.374365 2262 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:33:29.374594 kubelet[2262]: I0213 19:33:29.374381 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:33:29.387174 kubelet[2262]: W0213 19:33:29.387017 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:29.387174 kubelet[2262]: W0213 19:33:29.387038 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:29.387174 kubelet[2262]: E0213 19:33:29.387097 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:33:29.387174 kubelet[2262]: E0213 19:33:29.387115 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:29.387775 kubelet[2262]: I0213 19:33:29.387612 2262 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:33:29.388560 kubelet[2262]: I0213 19:33:29.388320 2262 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:33:29.413500 kubelet[2262]: W0213 19:33:29.413433 2262 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:33:29.434647 kubelet[2262]: I0213 19:33:29.434594 2262 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:33:29.434647 kubelet[2262]: I0213 19:33:29.434648 2262 server.go:1287] "Started kubelet" Feb 13 19:33:29.444737 kubelet[2262]: I0213 19:33:29.444696 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:33:29.444737 kubelet[2262]: I0213 19:33:29.444696 2262 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:33:29.444898 kubelet[2262]: I0213 19:33:29.444699 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:33:29.445147 kubelet[2262]: I0213 19:33:29.445119 2262 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:33:29.445697 kubelet[2262]: I0213 19:33:29.445676 2262 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:33:29.446469 kubelet[2262]: I0213 19:33:29.446448 2262 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:33:29.446890 kubelet[2262]: E0213 19:33:29.446869 2262 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:33:29.447120 kubelet[2262]: E0213 19:33:29.447091 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:29.447120 kubelet[2262]: I0213 19:33:29.447118 2262 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:33:29.447305 kubelet[2262]: I0213 19:33:29.447290 2262 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:33:29.447380 kubelet[2262]: I0213 19:33:29.447368 2262 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:33:29.447763 kubelet[2262]: W0213 19:33:29.447726 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:29.447801 kubelet[2262]: E0213 19:33:29.447768 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:29.448251 kubelet[2262]: I0213 19:33:29.448232 2262 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:33:29.448326 kubelet[2262]: I0213 19:33:29.448305 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:33:29.448926 kubelet[2262]: E0213 19:33:29.448619 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Feb 13 19:33:29.449155 kubelet[2262]: I0213 19:33:29.449132 2262 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:33:29.463490 kubelet[2262]: I0213 19:33:29.463286 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:33:29.464888 kubelet[2262]: I0213 19:33:29.464854 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:33:29.464888 kubelet[2262]: I0213 19:33:29.464885 2262 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:33:29.464984 kubelet[2262]: I0213 19:33:29.464912 2262 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:33:29.464984 kubelet[2262]: I0213 19:33:29.464922 2262 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:33:29.464984 kubelet[2262]: E0213 19:33:29.464971 2262 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:33:29.470766 kubelet[2262]: W0213 19:33:29.470712 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:29.470865 kubelet[2262]: E0213 19:33:29.470765 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:33:29.474285 kubelet[2262]: I0213 19:33:29.474254 2262 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:33:29.474285 kubelet[2262]: I0213 19:33:29.474286 2262 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:33:29.474358 kubelet[2262]: I0213 19:33:29.474314 2262 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:29.493537 kubelet[2262]: E0213 19:33:29.475726 2262 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db7ca1f2e0bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:33:29.434620091 +0000 UTC m=+0.634615273,LastTimestamp:2025-02-13 19:33:29.434620091 +0000 UTC m=+0.634615273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:33:29.547953 kubelet[2262]: E0213 19:33:29.547905 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:29.565256 kubelet[2262]: E0213 19:33:29.565221 2262 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:29.648596 kubelet[2262]: E0213 19:33:29.648552 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:33:29.650185 kubelet[2262]: E0213 19:33:29.650139 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Feb 13 19:33:29.749462 kubelet[2262]: E0213 19:33:29.749393 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:29.765778 kubelet[2262]: E0213 19:33:29.765695 2262 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:29.850004 kubelet[2262]: E0213 19:33:29.849887 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:29.950898 kubelet[2262]: E0213 19:33:29.950838 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.050955 kubelet[2262]: E0213 19:33:30.050888 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Feb 13 19:33:30.050955 kubelet[2262]: E0213 19:33:30.050923 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.151811 kubelet[2262]: E0213 19:33:30.151648 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.166940 kubelet[2262]: E0213 19:33:30.166840 2262 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:30.252527 kubelet[2262]: E0213 19:33:30.252420 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.353434 kubelet[2262]: E0213 19:33:30.353339 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:33:30.454041 kubelet[2262]: E0213 19:33:30.453874 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.516890 kubelet[2262]: W0213 19:33:30.516830 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:30.516890 kubelet[2262]: E0213 19:33:30.516883 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:30.518551 kubelet[2262]: W0213 19:33:30.518494 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:30.518606 kubelet[2262]: E0213 19:33:30.518556 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:30.554178 kubelet[2262]: E0213 19:33:30.554145 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:33:30.603217 kubelet[2262]: W0213 19:33:30.603121 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:30.603217 kubelet[2262]: E0213 19:33:30.603183 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:30.625565 kubelet[2262]: I0213 19:33:30.625469 2262 policy_none.go:49] "None policy: Start" Feb 13 19:33:30.625565 kubelet[2262]: I0213 19:33:30.625550 2262 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:33:30.625565 kubelet[2262]: I0213 19:33:30.625574 2262 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:33:30.654969 kubelet[2262]: E0213 19:33:30.654915 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.755735 kubelet[2262]: E0213 19:33:30.755568 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.782865 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:33:30.801497 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:33:30.851911 kubelet[2262]: E0213 19:33:30.851854 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" Feb 13 19:33:30.855901 kubelet[2262]: E0213 19:33:30.855856 2262 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:30.879956 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:33:30.886642 kubelet[2262]: W0213 19:33:30.886594 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:30.886763 kubelet[2262]: E0213 19:33:30.886648 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:30.889498 kubelet[2262]: I0213 19:33:30.889460 2262 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:33:30.889824 kubelet[2262]: I0213 19:33:30.889744 2262 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:33:30.889824 kubelet[2262]: I0213 19:33:30.889772 2262 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:33:30.890749 kubelet[2262]: I0213 19:33:30.890040 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:33:30.891220 kubelet[2262]: E0213 19:33:30.891170 2262 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:33:30.891364 kubelet[2262]: E0213 19:33:30.891236 2262 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:33:30.975783 systemd[1]: Created slice kubepods-burstable-podb0f4ea3951b904127cc6f707dbf92ec2.slice - libcontainer container kubepods-burstable-podb0f4ea3951b904127cc6f707dbf92ec2.slice. 
Feb 13 19:33:30.991883 kubelet[2262]: I0213 19:33:30.991847 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:30.992271 kubelet[2262]: E0213 19:33:30.992225 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:33:30.993120 kubelet[2262]: E0213 19:33:30.993089 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:30.996136 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:33:31.003927 kubelet[2262]: E0213 19:33:31.003887 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:31.006050 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
Feb 13 19:33:31.007869 kubelet[2262]: E0213 19:33:31.007841 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:31.057365 kubelet[2262]: I0213 19:33:31.057295 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0f4ea3951b904127cc6f707dbf92ec2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b0f4ea3951b904127cc6f707dbf92ec2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:31.057365 kubelet[2262]: I0213 19:33:31.057346 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:31.057593 kubelet[2262]: I0213 19:33:31.057392 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:31.057593 kubelet[2262]: I0213 19:33:31.057426 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0f4ea3951b904127cc6f707dbf92ec2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0f4ea3951b904127cc6f707dbf92ec2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:31.057593 kubelet[2262]: I0213 19:33:31.057467 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/b0f4ea3951b904127cc6f707dbf92ec2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0f4ea3951b904127cc6f707dbf92ec2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:31.057593 kubelet[2262]: I0213 19:33:31.057532 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:31.057593 kubelet[2262]: I0213 19:33:31.057570 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:31.057733 kubelet[2262]: I0213 19:33:31.057608 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:31.057733 kubelet[2262]: I0213 19:33:31.057629 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:31.194401 kubelet[2262]: I0213 19:33:31.194369 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:31.194831 kubelet[2262]: E0213 
19:33:31.194793 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:33:31.293940 kubelet[2262]: E0213 19:33:31.293908 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:31.294735 containerd[1495]: time="2025-02-13T19:33:31.294692961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b0f4ea3951b904127cc6f707dbf92ec2,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:31.304860 kubelet[2262]: E0213 19:33:31.304835 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:31.305330 containerd[1495]: time="2025-02-13T19:33:31.305282968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:31.311572 kubelet[2262]: E0213 19:33:31.311528 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:31.311964 containerd[1495]: time="2025-02-13T19:33:31.311922471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:31.437046 kubelet[2262]: E0213 19:33:31.436968 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:31.596955 kubelet[2262]: I0213 19:33:31.596825 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:31.597270 kubelet[2262]: E0213 19:33:31.597232 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:33:31.741382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount374478531.mount: Deactivated successfully. Feb 13 19:33:31.749718 containerd[1495]: time="2025-02-13T19:33:31.749644976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:31.752627 containerd[1495]: time="2025-02-13T19:33:31.752533176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:33:31.753836 containerd[1495]: time="2025-02-13T19:33:31.753793385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:31.756068 containerd[1495]: time="2025-02-13T19:33:31.756035344Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:31.756690 containerd[1495]: time="2025-02-13T19:33:31.756644495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:33:31.757779 containerd[1495]: time="2025-02-13T19:33:31.757737276Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:31.758686 containerd[1495]: time="2025-02-13T19:33:31.758661355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:33:31.760256 containerd[1495]: time="2025-02-13T19:33:31.760177854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:31.761023 containerd[1495]: time="2025-02-13T19:33:31.760998055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.166601ms" Feb 13 19:33:31.765146 containerd[1495]: time="2025-02-13T19:33:31.765093845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.090811ms" Feb 13 19:33:31.765533 containerd[1495]: time="2025-02-13T19:33:31.765489920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.093245ms" Feb 13 19:33:32.029268 containerd[1495]: time="2025-02-13T19:33:32.028829228Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:32.029268 containerd[1495]: time="2025-02-13T19:33:32.026842929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:32.029419 containerd[1495]: time="2025-02-13T19:33:32.029311766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:32.029419 containerd[1495]: time="2025-02-13T19:33:32.029369826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.029548 containerd[1495]: time="2025-02-13T19:33:32.029498952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.029772 containerd[1495]: time="2025-02-13T19:33:32.029716986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:32.029832 containerd[1495]: time="2025-02-13T19:33:32.029775889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.029988 containerd[1495]: time="2025-02-13T19:33:32.029955220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.032779 containerd[1495]: time="2025-02-13T19:33:32.032687708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:32.032942 containerd[1495]: time="2025-02-13T19:33:32.032875045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:32.032942 containerd[1495]: time="2025-02-13T19:33:32.032894673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.033204 containerd[1495]: time="2025-02-13T19:33:32.033107768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:32.092500 systemd[1]: Started cri-containerd-97884de2233187fd5b238030bf192a5bdba33ba8057d596eeba4693d1dcd2e19.scope - libcontainer container 97884de2233187fd5b238030bf192a5bdba33ba8057d596eeba4693d1dcd2e19. Feb 13 19:33:32.098108 systemd[1]: Started cri-containerd-db8ca3a68cfd55f7cbf4a260ca60a79cab45ecd1eab115b718488df0201d762a.scope - libcontainer container db8ca3a68cfd55f7cbf4a260ca60a79cab45ecd1eab115b718488df0201d762a. Feb 13 19:33:32.102581 systemd[1]: Started cri-containerd-7da1b0557af26b2ea05661c23ca7bdd958826cfd9d663e45a2c809922c54b82c.scope - libcontainer container 7da1b0557af26b2ea05661c23ca7bdd958826cfd9d663e45a2c809922c54b82c. 
Feb 13 19:33:32.217213 containerd[1495]: time="2025-02-13T19:33:32.217149404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"db8ca3a68cfd55f7cbf4a260ca60a79cab45ecd1eab115b718488df0201d762a\"" Feb 13 19:33:32.218919 kubelet[2262]: E0213 19:33:32.218893 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.222110 containerd[1495]: time="2025-02-13T19:33:32.222072590Z" level=info msg="CreateContainer within sandbox \"db8ca3a68cfd55f7cbf4a260ca60a79cab45ecd1eab115b718488df0201d762a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:33:32.223098 containerd[1495]: time="2025-02-13T19:33:32.223043497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b0f4ea3951b904127cc6f707dbf92ec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"97884de2233187fd5b238030bf192a5bdba33ba8057d596eeba4693d1dcd2e19\"" Feb 13 19:33:32.223766 kubelet[2262]: E0213 19:33:32.223736 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.225809 containerd[1495]: time="2025-02-13T19:33:32.225705993Z" level=info msg="CreateContainer within sandbox \"97884de2233187fd5b238030bf192a5bdba33ba8057d596eeba4693d1dcd2e19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:33:32.237146 containerd[1495]: time="2025-02-13T19:33:32.237076745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"7da1b0557af26b2ea05661c23ca7bdd958826cfd9d663e45a2c809922c54b82c\"" Feb 13 
19:33:32.237820 kubelet[2262]: E0213 19:33:32.237791 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.240037 containerd[1495]: time="2025-02-13T19:33:32.239997222Z" level=info msg="CreateContainer within sandbox \"7da1b0557af26b2ea05661c23ca7bdd958826cfd9d663e45a2c809922c54b82c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:33:32.242139 containerd[1495]: time="2025-02-13T19:33:32.242071799Z" level=info msg="CreateContainer within sandbox \"db8ca3a68cfd55f7cbf4a260ca60a79cab45ecd1eab115b718488df0201d762a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b23693f744642ba0106a5857b9fa3b26f8f7e171d44e7464f1f2f1c5e4c451b2\"" Feb 13 19:33:32.242794 containerd[1495]: time="2025-02-13T19:33:32.242756782Z" level=info msg="StartContainer for \"b23693f744642ba0106a5857b9fa3b26f8f7e171d44e7464f1f2f1c5e4c451b2\"" Feb 13 19:33:32.254206 containerd[1495]: time="2025-02-13T19:33:32.254140200Z" level=info msg="CreateContainer within sandbox \"97884de2233187fd5b238030bf192a5bdba33ba8057d596eeba4693d1dcd2e19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d89baf860e63f0e5633baafd453935f96ce52c5166bc0f6bb1e74c68ae56c89\"" Feb 13 19:33:32.254831 containerd[1495]: time="2025-02-13T19:33:32.254767784Z" level=info msg="StartContainer for \"2d89baf860e63f0e5633baafd453935f96ce52c5166bc0f6bb1e74c68ae56c89\"" Feb 13 19:33:32.274678 containerd[1495]: time="2025-02-13T19:33:32.274531955Z" level=info msg="CreateContainer within sandbox \"7da1b0557af26b2ea05661c23ca7bdd958826cfd9d663e45a2c809922c54b82c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d32e3ba2bedba5937d39f4e7c0570a886552fd26cc937c15f3faea952a9cfd1d\"" Feb 13 19:33:32.275288 containerd[1495]: time="2025-02-13T19:33:32.275243879Z" level=info 
msg="StartContainer for \"d32e3ba2bedba5937d39f4e7c0570a886552fd26cc937c15f3faea952a9cfd1d\"" Feb 13 19:33:32.280351 systemd[1]: Started cri-containerd-b23693f744642ba0106a5857b9fa3b26f8f7e171d44e7464f1f2f1c5e4c451b2.scope - libcontainer container b23693f744642ba0106a5857b9fa3b26f8f7e171d44e7464f1f2f1c5e4c451b2. Feb 13 19:33:32.284004 systemd[1]: Started cri-containerd-2d89baf860e63f0e5633baafd453935f96ce52c5166bc0f6bb1e74c68ae56c89.scope - libcontainer container 2d89baf860e63f0e5633baafd453935f96ce52c5166bc0f6bb1e74c68ae56c89. Feb 13 19:33:32.314757 systemd[1]: Started cri-containerd-d32e3ba2bedba5937d39f4e7c0570a886552fd26cc937c15f3faea952a9cfd1d.scope - libcontainer container d32e3ba2bedba5937d39f4e7c0570a886552fd26cc937c15f3faea952a9cfd1d. Feb 13 19:33:32.363658 containerd[1495]: time="2025-02-13T19:33:32.357551492Z" level=info msg="StartContainer for \"b23693f744642ba0106a5857b9fa3b26f8f7e171d44e7464f1f2f1c5e4c451b2\" returns successfully" Feb 13 19:33:32.368234 containerd[1495]: time="2025-02-13T19:33:32.368101103Z" level=info msg="StartContainer for \"2d89baf860e63f0e5633baafd453935f96ce52c5166bc0f6bb1e74c68ae56c89\" returns successfully" Feb 13 19:33:32.381085 kubelet[2262]: W0213 19:33:32.380966 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:32.381085 kubelet[2262]: E0213 19:33:32.381050 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:32.400048 kubelet[2262]: I0213 19:33:32.399813 2262 kubelet_node_status.go:76] "Attempting to register node" 
node="localhost" Feb 13 19:33:32.400744 kubelet[2262]: E0213 19:33:32.400715 2262 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Feb 13 19:33:32.438870 kubelet[2262]: W0213 19:33:32.438730 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Feb 13 19:33:32.438870 kubelet[2262]: E0213 19:33:32.438820 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:32.482518 containerd[1495]: time="2025-02-13T19:33:32.482395542Z" level=info msg="StartContainer for \"d32e3ba2bedba5937d39f4e7c0570a886552fd26cc937c15f3faea952a9cfd1d\" returns successfully" Feb 13 19:33:32.486927 kubelet[2262]: E0213 19:33:32.486903 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:32.487277 kubelet[2262]: E0213 19:33:32.487178 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.489881 kubelet[2262]: E0213 19:33:32.489741 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:32.489881 kubelet[2262]: E0213 19:33:32.489837 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:32.493056 kubelet[2262]: E0213 19:33:32.492946 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:32.493250 kubelet[2262]: E0213 19:33:32.493176 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:33.496031 kubelet[2262]: E0213 19:33:33.495764 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:33.496031 kubelet[2262]: E0213 19:33:33.495915 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:33.496031 kubelet[2262]: E0213 19:33:33.495940 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:33.496737 kubelet[2262]: E0213 19:33:33.496081 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:33.496737 kubelet[2262]: E0213 19:33:33.496484 2262 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:33.496737 kubelet[2262]: E0213 19:33:33.496619 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:33.758932 kubelet[2262]: E0213 19:33:33.758781 2262 nodelease.go:49] "Failed to get node when trying 
to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:33:33.969052 kubelet[2262]: E0213 19:33:33.968905 2262 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db7ca1f2e0bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:33:29.434620091 +0000 UTC m=+0.634615273,LastTimestamp:2025-02-13 19:33:29.434620091 +0000 UTC m=+0.634615273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:33:34.003180 kubelet[2262]: I0213 19:33:34.003104 2262 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:34.093175 kubelet[2262]: I0213 19:33:34.093125 2262 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:33:34.093175 kubelet[2262]: E0213 19:33:34.093184 2262 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:33:34.145380 kubelet[2262]: E0213 19:33:34.145212 2262 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db7ca2ad9b01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:33:29.446857473 +0000 UTC m=+0.646852665,LastTimestamp:2025-02-13 19:33:29.446857473 +0000 
UTC m=+0.646852665,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:33:34.149464 kubelet[2262]: I0213 19:33:34.149292 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:34.197123 kubelet[2262]: E0213 19:33:34.197076 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:34.197123 kubelet[2262]: I0213 19:33:34.197117 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:34.198656 kubelet[2262]: E0213 19:33:34.198583 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:34.198656 kubelet[2262]: I0213 19:33:34.198605 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:34.200183 kubelet[2262]: E0213 19:33:34.200141 2262 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:34.377392 kubelet[2262]: I0213 19:33:34.377277 2262 apiserver.go:52] "Watching apiserver" Feb 13 19:33:34.448437 kubelet[2262]: I0213 19:33:34.448387 2262 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:33:34.495690 kubelet[2262]: I0213 19:33:34.495646 2262 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:34.497741 kubelet[2262]: E0213 19:33:34.497712 2262 
kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:34.498061 kubelet[2262]: E0213 19:33:34.497901 2262 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:36.504625 update_engine[1483]: I20250213 19:33:36.504534 1483 update_attempter.cc:509] Updating boot flags... Feb 13 19:33:36.846244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2546) Feb 13 19:33:36.874239 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2550) Feb 13 19:33:36.917301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2550) Feb 13 19:33:39.287771 systemd[1]: Reloading requested from client PID 2556 ('systemctl') (unit session-9.scope)... Feb 13 19:33:39.287789 systemd[1]: Reloading... Feb 13 19:33:39.374236 zram_generator::config[2598]: No configuration found. Feb 13 19:33:39.486011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:33:39.578183 systemd[1]: Reloading finished in 289 ms. Feb 13 19:33:39.618136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:39.635718 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:33:39.636014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:39.636097 systemd[1]: kubelet.service: Consumed 1.379s CPU time, 132.2M memory peak, 0B memory swap peak. Feb 13 19:33:39.643890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:33:39.838140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:39.850742 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:33:39.918516 kubelet[2640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:33:39.918516 kubelet[2640]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:33:39.918516 kubelet[2640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:33:39.919051 kubelet[2640]: I0213 19:33:39.918602 2640 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:33:39.927603 kubelet[2640]: I0213 19:33:39.927529 2640 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:33:39.927603 kubelet[2640]: I0213 19:33:39.927573 2640 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:33:39.927976 kubelet[2640]: I0213 19:33:39.927829 2640 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:33:39.930067 kubelet[2640]: I0213 19:33:39.930044 2640 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:33:39.933232 kubelet[2640]: I0213 19:33:39.933186 2640 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:33:39.946891 kubelet[2640]: E0213 19:33:39.946834 2640 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:33:39.946891 kubelet[2640]: I0213 19:33:39.946877 2640 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:33:39.952537 kubelet[2640]: I0213 19:33:39.952496 2640 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:33:39.953308 kubelet[2640]: I0213 19:33:39.952712 2640 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:33:39.953308 kubelet[2640]: I0213 19:33:39.952773 2640 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:33:39.953308 kubelet[2640]: I0213 19:33:39.952941 2640 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:33:39.953308 kubelet[2640]: I0213 19:33:39.952950 2640 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:33:39.953517 kubelet[2640]: I0213 19:33:39.952992 2640 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:39.953517 kubelet[2640]: I0213 19:33:39.953144 2640 kubelet.go:446] "Attempting 
to sync node with API server" Feb 13 19:33:39.953517 kubelet[2640]: I0213 19:33:39.953155 2640 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:33:39.953517 kubelet[2640]: I0213 19:33:39.953184 2640 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:33:39.953517 kubelet[2640]: I0213 19:33:39.953227 2640 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:33:39.955261 kubelet[2640]: I0213 19:33:39.954788 2640 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:33:39.955261 kubelet[2640]: I0213 19:33:39.955109 2640 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:33:39.955545 kubelet[2640]: I0213 19:33:39.955527 2640 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:33:39.955612 kubelet[2640]: I0213 19:33:39.955554 2640 server.go:1287] "Started kubelet" Feb 13 19:33:39.955899 kubelet[2640]: I0213 19:33:39.955866 2640 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:33:39.957639 kubelet[2640]: I0213 19:33:39.956795 2640 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:33:39.957639 kubelet[2640]: I0213 19:33:39.957435 2640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:33:39.962245 kubelet[2640]: I0213 19:33:39.960274 2640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:33:39.962245 kubelet[2640]: I0213 19:33:39.960534 2640 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:33:39.967084 kubelet[2640]: E0213 19:33:39.966997 2640 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:39.967245 kubelet[2640]: I0213 19:33:39.967237 2640 volume_manager.go:297] "Starting Kubelet Volume 
Manager" Feb 13 19:33:39.967894 kubelet[2640]: I0213 19:33:39.967865 2640 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:33:39.967954 kubelet[2640]: I0213 19:33:39.967939 2640 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:33:39.968758 kubelet[2640]: I0213 19:33:39.968718 2640 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:33:39.968919 kubelet[2640]: I0213 19:33:39.968885 2640 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:33:39.969026 kubelet[2640]: I0213 19:33:39.969004 2640 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:33:39.973121 kubelet[2640]: I0213 19:33:39.972456 2640 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:33:39.979274 kubelet[2640]: E0213 19:33:39.979222 2640 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:33:39.985540 kubelet[2640]: I0213 19:33:39.985025 2640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:33:39.987604 kubelet[2640]: I0213 19:33:39.987545 2640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:33:39.987604 kubelet[2640]: I0213 19:33:39.987594 2640 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:33:39.987981 kubelet[2640]: I0213 19:33:39.987620 2640 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:33:39.987981 kubelet[2640]: I0213 19:33:39.987630 2640 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:33:39.987981 kubelet[2640]: E0213 19:33:39.987683 2640 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:33:40.012784 kubelet[2640]: I0213 19:33:40.012755 2640 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:33:40.012972 kubelet[2640]: I0213 19:33:40.012958 2640 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:33:40.013060 kubelet[2640]: I0213 19:33:40.013048 2640 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:40.013339 kubelet[2640]: I0213 19:33:40.013321 2640 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:33:40.013430 kubelet[2640]: I0213 19:33:40.013402 2640 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:33:40.013755 kubelet[2640]: I0213 19:33:40.013483 2640 policy_none.go:49] "None policy: Start" Feb 13 19:33:40.013755 kubelet[2640]: I0213 19:33:40.013499 2640 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:33:40.013755 kubelet[2640]: I0213 19:33:40.013545 2640 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:33:40.013755 kubelet[2640]: I0213 19:33:40.013679 2640 state_mem.go:75] "Updated machine memory state" Feb 13 19:33:40.019155 kubelet[2640]: I0213 19:33:40.019133 2640 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:33:40.019613 kubelet[2640]: I0213 19:33:40.019598 2640 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:33:40.019763 kubelet[2640]: I0213 19:33:40.019731 2640 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:33:40.020024 kubelet[2640]: I0213 19:33:40.020010 2640 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:33:40.020884 kubelet[2640]: E0213 19:33:40.020862 2640 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:33:40.089872 kubelet[2640]: I0213 19:33:40.089705 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:40.090012 kubelet[2640]: I0213 19:33:40.089955 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:40.090368 kubelet[2640]: I0213 19:33:40.090083 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:40.125080 kubelet[2640]: I0213 19:33:40.125026 2640 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:40.132099 kubelet[2640]: I0213 19:33:40.132058 2640 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:33:40.132307 kubelet[2640]: I0213 19:33:40.132155 2640 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:33:40.269800 kubelet[2640]: I0213 19:33:40.269708 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0f4ea3951b904127cc6f707dbf92ec2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0f4ea3951b904127cc6f707dbf92ec2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:40.269800 kubelet[2640]: I0213 19:33:40.269768 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
19:33:40.270022 kubelet[2640]: I0213 19:33:40.269822 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:40.270022 kubelet[2640]: I0213 19:33:40.269857 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:40.270022 kubelet[2640]: I0213 19:33:40.269880 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0f4ea3951b904127cc6f707dbf92ec2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b0f4ea3951b904127cc6f707dbf92ec2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:40.270022 kubelet[2640]: I0213 19:33:40.269904 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0f4ea3951b904127cc6f707dbf92ec2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b0f4ea3951b904127cc6f707dbf92ec2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:40.270022 kubelet[2640]: I0213 19:33:40.269931 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:40.270240 kubelet[2640]: I0213 19:33:40.269958 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:40.270240 kubelet[2640]: I0213 19:33:40.269979 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:40.401145 kubelet[2640]: E0213 19:33:40.400732 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:40.401145 kubelet[2640]: E0213 19:33:40.400859 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:40.401145 kubelet[2640]: E0213 19:33:40.400892 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:40.955110 kubelet[2640]: I0213 19:33:40.955060 2640 apiserver.go:52] "Watching apiserver" Feb 13 19:33:40.969014 kubelet[2640]: I0213 19:33:40.968962 2640 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:33:40.998018 kubelet[2640]: I0213 19:33:40.997985 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:40.998187 
kubelet[2640]: I0213 19:33:40.998161 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:40.998387 kubelet[2640]: I0213 19:33:40.998367 2640 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:41.006138 kubelet[2640]: E0213 19:33:41.006078 2640 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:41.006333 kubelet[2640]: E0213 19:33:41.006313 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:41.007036 kubelet[2640]: E0213 19:33:41.006782 2640 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:41.007036 kubelet[2640]: E0213 19:33:41.006890 2640 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:41.007036 kubelet[2640]: E0213 19:33:41.006941 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:41.007036 kubelet[2640]: E0213 19:33:41.007006 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:41.028209 kubelet[2640]: I0213 19:33:41.028110 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.028086794 podStartE2EDuration="1.028086794s" podCreationTimestamp="2025-02-13 19:33:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:41.020212386 +0000 UTC m=+1.153529182" watchObservedRunningTime="2025-02-13 19:33:41.028086794 +0000 UTC m=+1.161403579" Feb 13 19:33:41.028431 kubelet[2640]: I0213 19:33:41.028266 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.028260913 podStartE2EDuration="1.028260913s" podCreationTimestamp="2025-02-13 19:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:41.028181393 +0000 UTC m=+1.161498178" watchObservedRunningTime="2025-02-13 19:33:41.028260913 +0000 UTC m=+1.161577698" Feb 13 19:33:41.047452 kubelet[2640]: I0213 19:33:41.047386 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.047362235 podStartE2EDuration="1.047362235s" podCreationTimestamp="2025-02-13 19:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:41.035811017 +0000 UTC m=+1.169127802" watchObservedRunningTime="2025-02-13 19:33:41.047362235 +0000 UTC m=+1.180679020" Feb 13 19:33:42.000739 kubelet[2640]: E0213 19:33:42.000680 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:42.002228 kubelet[2640]: E0213 19:33:42.001395 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:42.002228 kubelet[2640]: E0213 19:33:42.001326 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:43.001529 kubelet[2640]: E0213 19:33:43.001490 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:43.051809 kubelet[2640]: E0213 19:33:43.051785 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:44.003087 kubelet[2640]: E0213 19:33:44.003012 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:44.003985 kubelet[2640]: E0213 19:33:44.003141 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:44.383684 sudo[1691]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:44.385586 sshd[1690]: Connection closed by 10.0.0.1 port 49572 Feb 13 19:33:44.386160 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:44.391357 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:49572.service: Deactivated successfully. Feb 13 19:33:44.395982 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:33:44.396421 systemd[1]: session-9.scope: Consumed 4.864s CPU time, 152.7M memory peak, 0B memory swap peak. Feb 13 19:33:44.398658 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:33:44.400171 systemd-logind[1480]: Removed session 9. 
Feb 13 19:33:44.547259 kubelet[2640]: I0213 19:33:44.547223 2640 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:33:44.547742 containerd[1495]: time="2025-02-13T19:33:44.547693556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:33:44.548174 kubelet[2640]: I0213 19:33:44.547923 2640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:33:45.005335 kubelet[2640]: E0213 19:33:45.005276 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:45.333621 systemd[1]: Created slice kubepods-besteffort-podf8dfc237_115f_451d_aacf_bc143aee62a2.slice - libcontainer container kubepods-besteffort-podf8dfc237_115f_451d_aacf_bc143aee62a2.slice. Feb 13 19:33:45.499599 kubelet[2640]: I0213 19:33:45.499508 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9gp\" (UniqueName: \"kubernetes.io/projected/f8dfc237-115f-451d-aacf-bc143aee62a2-kube-api-access-fg9gp\") pod \"kube-proxy-ndjwp\" (UID: \"f8dfc237-115f-451d-aacf-bc143aee62a2\") " pod="kube-system/kube-proxy-ndjwp" Feb 13 19:33:45.499599 kubelet[2640]: I0213 19:33:45.499598 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8dfc237-115f-451d-aacf-bc143aee62a2-kube-proxy\") pod \"kube-proxy-ndjwp\" (UID: \"f8dfc237-115f-451d-aacf-bc143aee62a2\") " pod="kube-system/kube-proxy-ndjwp" Feb 13 19:33:45.499819 kubelet[2640]: I0213 19:33:45.499633 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8dfc237-115f-451d-aacf-bc143aee62a2-lib-modules\") pod 
\"kube-proxy-ndjwp\" (UID: \"f8dfc237-115f-451d-aacf-bc143aee62a2\") " pod="kube-system/kube-proxy-ndjwp" Feb 13 19:33:45.499819 kubelet[2640]: I0213 19:33:45.499710 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8dfc237-115f-451d-aacf-bc143aee62a2-xtables-lock\") pod \"kube-proxy-ndjwp\" (UID: \"f8dfc237-115f-451d-aacf-bc143aee62a2\") " pod="kube-system/kube-proxy-ndjwp" Feb 13 19:33:45.613829 systemd[1]: Created slice kubepods-besteffort-pod65b5596e_f142_4af4_801a_70a7eec3c34f.slice - libcontainer container kubepods-besteffort-pod65b5596e_f142_4af4_801a_70a7eec3c34f.slice. Feb 13 19:33:45.647785 kubelet[2640]: E0213 19:33:45.647708 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:45.648598 containerd[1495]: time="2025-02-13T19:33:45.648492981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndjwp,Uid:f8dfc237-115f-451d-aacf-bc143aee62a2,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:45.685912 containerd[1495]: time="2025-02-13T19:33:45.685768142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:45.686162 containerd[1495]: time="2025-02-13T19:33:45.685892447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:45.686162 containerd[1495]: time="2025-02-13T19:33:45.686122611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:45.687042 containerd[1495]: time="2025-02-13T19:33:45.686974981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:45.701764 kubelet[2640]: I0213 19:33:45.701699 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2lw\" (UniqueName: \"kubernetes.io/projected/65b5596e-f142-4af4-801a-70a7eec3c34f-kube-api-access-rr2lw\") pod \"tigera-operator-7d68577dc5-s9p8g\" (UID: \"65b5596e-f142-4af4-801a-70a7eec3c34f\") " pod="tigera-operator/tigera-operator-7d68577dc5-s9p8g" Feb 13 19:33:45.701764 kubelet[2640]: I0213 19:33:45.701763 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65b5596e-f142-4af4-801a-70a7eec3c34f-var-lib-calico\") pod \"tigera-operator-7d68577dc5-s9p8g\" (UID: \"65b5596e-f142-4af4-801a-70a7eec3c34f\") " pod="tigera-operator/tigera-operator-7d68577dc5-s9p8g" Feb 13 19:33:45.710425 systemd[1]: Started cri-containerd-23187416481b357f291b69e98060bd4a2a61e4a3661d2dd0e67d7ec2017c17fc.scope - libcontainer container 23187416481b357f291b69e98060bd4a2a61e4a3661d2dd0e67d7ec2017c17fc. 
Feb 13 19:33:45.734600 containerd[1495]: time="2025-02-13T19:33:45.734560448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndjwp,Uid:f8dfc237-115f-451d-aacf-bc143aee62a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"23187416481b357f291b69e98060bd4a2a61e4a3661d2dd0e67d7ec2017c17fc\"" Feb 13 19:33:45.735620 kubelet[2640]: E0213 19:33:45.735583 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:45.738567 containerd[1495]: time="2025-02-13T19:33:45.738399073Z" level=info msg="CreateContainer within sandbox \"23187416481b357f291b69e98060bd4a2a61e4a3661d2dd0e67d7ec2017c17fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:33:45.762667 containerd[1495]: time="2025-02-13T19:33:45.761974301Z" level=info msg="CreateContainer within sandbox \"23187416481b357f291b69e98060bd4a2a61e4a3661d2dd0e67d7ec2017c17fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b16e301e00d090865f62ab362774b436e61126ab554cf241c172938937bc3dc7\"" Feb 13 19:33:45.762946 containerd[1495]: time="2025-02-13T19:33:45.762826349Z" level=info msg="StartContainer for \"b16e301e00d090865f62ab362774b436e61126ab554cf241c172938937bc3dc7\"" Feb 13 19:33:45.794417 systemd[1]: Started cri-containerd-b16e301e00d090865f62ab362774b436e61126ab554cf241c172938937bc3dc7.scope - libcontainer container b16e301e00d090865f62ab362774b436e61126ab554cf241c172938937bc3dc7. 
Feb 13 19:33:45.905284 containerd[1495]: time="2025-02-13T19:33:45.904961272Z" level=info msg="StartContainer for \"b16e301e00d090865f62ab362774b436e61126ab554cf241c172938937bc3dc7\" returns successfully" Feb 13 19:33:45.922375 containerd[1495]: time="2025-02-13T19:33:45.922302486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-s9p8g,Uid:65b5596e-f142-4af4-801a-70a7eec3c34f,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:33:46.008845 kubelet[2640]: E0213 19:33:46.008804 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:46.009391 kubelet[2640]: E0213 19:33:46.009037 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:46.470046 containerd[1495]: time="2025-02-13T19:33:46.469928659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:46.470046 containerd[1495]: time="2025-02-13T19:33:46.469984224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:46.470046 containerd[1495]: time="2025-02-13T19:33:46.469994313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:46.470330 containerd[1495]: time="2025-02-13T19:33:46.470085525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:46.490347 systemd[1]: Started cri-containerd-a5e3943f8ef564828d78fdb7e6b360c2c16560afe00cad0b3dc3c602b5972aea.scope - libcontainer container a5e3943f8ef564828d78fdb7e6b360c2c16560afe00cad0b3dc3c602b5972aea. 
Feb 13 19:33:46.526626 containerd[1495]: time="2025-02-13T19:33:46.526571338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-s9p8g,Uid:65b5596e-f142-4af4-801a-70a7eec3c34f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a5e3943f8ef564828d78fdb7e6b360c2c16560afe00cad0b3dc3c602b5972aea\"" Feb 13 19:33:46.528329 containerd[1495]: time="2025-02-13T19:33:46.528269712Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:33:48.920119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746417639.mount: Deactivated successfully. Feb 13 19:33:49.157864 kubelet[2640]: E0213 19:33:49.157447 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:49.173566 kubelet[2640]: I0213 19:33:49.173391 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ndjwp" podStartSLOduration=4.173368305 podStartE2EDuration="4.173368305s" podCreationTimestamp="2025-02-13 19:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:33:46.083451811 +0000 UTC m=+6.216768597" watchObservedRunningTime="2025-02-13 19:33:49.173368305 +0000 UTC m=+9.306685100" Feb 13 19:33:49.475653 containerd[1495]: time="2025-02-13T19:33:49.475488839Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:49.476338 containerd[1495]: time="2025-02-13T19:33:49.476282465Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:33:49.477575 containerd[1495]: time="2025-02-13T19:33:49.477546437Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:49.480567 containerd[1495]: time="2025-02-13T19:33:49.480518357Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:49.481279 containerd[1495]: time="2025-02-13T19:33:49.481249345Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.952910374s" Feb 13 19:33:49.481330 containerd[1495]: time="2025-02-13T19:33:49.481285543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:33:49.484050 containerd[1495]: time="2025-02-13T19:33:49.483993536Z" level=info msg="CreateContainer within sandbox \"a5e3943f8ef564828d78fdb7e6b360c2c16560afe00cad0b3dc3c602b5972aea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:33:49.497347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346385915.mount: Deactivated successfully. 
Feb 13 19:33:49.499244 containerd[1495]: time="2025-02-13T19:33:49.499178411Z" level=info msg="CreateContainer within sandbox \"a5e3943f8ef564828d78fdb7e6b360c2c16560afe00cad0b3dc3c602b5972aea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dfadd82a56b3330a74f32eec242bbd44384b50e9ddd4055d7fd2ca08dae60346\"" Feb 13 19:33:49.499881 containerd[1495]: time="2025-02-13T19:33:49.499848343Z" level=info msg="StartContainer for \"dfadd82a56b3330a74f32eec242bbd44384b50e9ddd4055d7fd2ca08dae60346\"" Feb 13 19:33:49.531375 systemd[1]: Started cri-containerd-dfadd82a56b3330a74f32eec242bbd44384b50e9ddd4055d7fd2ca08dae60346.scope - libcontainer container dfadd82a56b3330a74f32eec242bbd44384b50e9ddd4055d7fd2ca08dae60346. Feb 13 19:33:49.593497 containerd[1495]: time="2025-02-13T19:33:49.593420413Z" level=info msg="StartContainer for \"dfadd82a56b3330a74f32eec242bbd44384b50e9ddd4055d7fd2ca08dae60346\" returns successfully" Feb 13 19:33:50.022380 kubelet[2640]: E0213 19:33:50.022289 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:51.023781 kubelet[2640]: E0213 19:33:51.023747 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:53.432794 kubelet[2640]: I0213 19:33:53.431780 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-s9p8g" podStartSLOduration=5.477428619 podStartE2EDuration="8.431757516s" podCreationTimestamp="2025-02-13 19:33:45 +0000 UTC" firstStartedPulling="2025-02-13 19:33:46.52775491 +0000 UTC m=+6.661071695" lastFinishedPulling="2025-02-13 19:33:49.482083807 +0000 UTC m=+9.615400592" observedRunningTime="2025-02-13 19:33:50.032985976 +0000 UTC m=+10.166302761" 
watchObservedRunningTime="2025-02-13 19:33:53.431757516 +0000 UTC m=+13.565074301" Feb 13 19:33:53.442816 systemd[1]: Created slice kubepods-besteffort-pod7b3bc524_6cce_4157_93cb_5afb17b3d852.slice - libcontainer container kubepods-besteffort-pod7b3bc524_6cce_4157_93cb_5afb17b3d852.slice. Feb 13 19:33:53.453166 kubelet[2640]: I0213 19:33:53.453131 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7b3bc524-6cce-4157-93cb-5afb17b3d852-typha-certs\") pod \"calico-typha-748559d5c4-mp6fj\" (UID: \"7b3bc524-6cce-4157-93cb-5afb17b3d852\") " pod="calico-system/calico-typha-748559d5c4-mp6fj" Feb 13 19:33:53.453166 kubelet[2640]: I0213 19:33:53.453165 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r422k\" (UniqueName: \"kubernetes.io/projected/7b3bc524-6cce-4157-93cb-5afb17b3d852-kube-api-access-r422k\") pod \"calico-typha-748559d5c4-mp6fj\" (UID: \"7b3bc524-6cce-4157-93cb-5afb17b3d852\") " pod="calico-system/calico-typha-748559d5c4-mp6fj" Feb 13 19:33:53.453371 kubelet[2640]: I0213 19:33:53.453183 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b3bc524-6cce-4157-93cb-5afb17b3d852-tigera-ca-bundle\") pod \"calico-typha-748559d5c4-mp6fj\" (UID: \"7b3bc524-6cce-4157-93cb-5afb17b3d852\") " pod="calico-system/calico-typha-748559d5c4-mp6fj" Feb 13 19:33:53.746902 kubelet[2640]: E0213 19:33:53.746790 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:53.747398 containerd[1495]: time="2025-02-13T19:33:53.747331180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-748559d5c4-mp6fj,Uid:7b3bc524-6cce-4157-93cb-5afb17b3d852,Namespace:calico-system,Attempt:0,}" Feb 13 19:33:54.146375 containerd[1495]: time="2025-02-13T19:33:54.146237765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:54.146375 containerd[1495]: time="2025-02-13T19:33:54.146288220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:54.146375 containerd[1495]: time="2025-02-13T19:33:54.146297738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:54.146617 containerd[1495]: time="2025-02-13T19:33:54.146382377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:54.169731 systemd[1]: Started cri-containerd-cc508a2a5f33d8e84f8c99ba91d645df470ab3119e8463745d184605aff4f399.scope - libcontainer container cc508a2a5f33d8e84f8c99ba91d645df470ab3119e8463745d184605aff4f399. 
Feb 13 19:33:54.209323 containerd[1495]: time="2025-02-13T19:33:54.209255990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-748559d5c4-mp6fj,Uid:7b3bc524-6cce-4157-93cb-5afb17b3d852,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc508a2a5f33d8e84f8c99ba91d645df470ab3119e8463745d184605aff4f399\"" Feb 13 19:33:54.210133 kubelet[2640]: E0213 19:33:54.210088 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:54.211585 containerd[1495]: time="2025-02-13T19:33:54.211563773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:33:55.064828 kubelet[2640]: I0213 19:33:55.064365 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-xtables-lock\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.064828 kubelet[2640]: I0213 19:33:55.064417 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9z9\" (UniqueName: \"kubernetes.io/projected/86f48d19-cc07-46f8-8ca7-06a1effb72ea-kube-api-access-bl9z9\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.064828 kubelet[2640]: I0213 19:33:55.064453 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-policysync\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.064828 kubelet[2640]: I0213 19:33:55.064474 2640 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-var-run-calico\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.064828 kubelet[2640]: I0213 19:33:55.064493 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-var-lib-calico\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065496 kubelet[2640]: I0213 19:33:55.064514 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-cni-bin-dir\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065496 kubelet[2640]: I0213 19:33:55.064535 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-cni-net-dir\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065496 kubelet[2640]: I0213 19:33:55.064554 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-flexvol-driver-host\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065496 kubelet[2640]: I0213 19:33:55.064577 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f48d19-cc07-46f8-8ca7-06a1effb72ea-tigera-ca-bundle\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065496 kubelet[2640]: I0213 19:33:55.064597 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86f48d19-cc07-46f8-8ca7-06a1effb72ea-node-certs\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065667 kubelet[2640]: I0213 19:33:55.064627 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-cni-log-dir\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.065667 kubelet[2640]: I0213 19:33:55.064648 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f48d19-cc07-46f8-8ca7-06a1effb72ea-lib-modules\") pod \"calico-node-d7xzm\" (UID: \"86f48d19-cc07-46f8-8ca7-06a1effb72ea\") " pod="calico-system/calico-node-d7xzm" Feb 13 19:33:55.071948 systemd[1]: Created slice kubepods-besteffort-pod86f48d19_cc07_46f8_8ca7_06a1effb72ea.slice - libcontainer container kubepods-besteffort-pod86f48d19_cc07_46f8_8ca7_06a1effb72ea.slice. 
Feb 13 19:33:55.167443 kubelet[2640]: E0213 19:33:55.167353 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.167443 kubelet[2640]: W0213 19:33:55.167385 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.167443 kubelet[2640]: E0213 19:33:55.167419 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.170059 kubelet[2640]: E0213 19:33:55.170022 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.170059 kubelet[2640]: W0213 19:33:55.170054 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.170131 kubelet[2640]: E0213 19:33:55.170081 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.285123 kubelet[2640]: E0213 19:33:55.283374 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.285123 kubelet[2640]: W0213 19:33:55.283411 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.285123 kubelet[2640]: E0213 19:33:55.283472 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.306287 kubelet[2640]: E0213 19:33:55.305810 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:33:55.366490 kubelet[2640]: E0213 19:33:55.366329 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.366490 kubelet[2640]: W0213 19:33:55.366360 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.366490 kubelet[2640]: E0213 19:33:55.366384 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.366675 kubelet[2640]: E0213 19:33:55.366599 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.366675 kubelet[2640]: W0213 19:33:55.366609 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.366675 kubelet[2640]: E0213 19:33:55.366620 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.366799 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.367948 kubelet[2640]: W0213 19:33:55.366812 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.366821 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.367039 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.367948 kubelet[2640]: W0213 19:33:55.367047 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.367056 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.367275 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.367948 kubelet[2640]: W0213 19:33:55.367301 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.367312 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.367948 kubelet[2640]: E0213 19:33:55.367502 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.368186 kubelet[2640]: W0213 19:33:55.367510 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.368186 kubelet[2640]: E0213 19:33:55.367537 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368336 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369215 kubelet[2640]: W0213 19:33:55.368348 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368357 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368549 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369215 kubelet[2640]: W0213 19:33:55.368557 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368565 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368755 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369215 kubelet[2640]: W0213 19:33:55.368763 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368771 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.369215 kubelet[2640]: E0213 19:33:55.368967 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369482 kubelet[2640]: W0213 19:33:55.368974 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369482 kubelet[2640]: E0213 19:33:55.368984 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.369482 kubelet[2640]: E0213 19:33:55.369166 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369482 kubelet[2640]: W0213 19:33:55.369174 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369482 kubelet[2640]: E0213 19:33:55.369183 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.369482 kubelet[2640]: E0213 19:33:55.369416 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369482 kubelet[2640]: W0213 19:33:55.369425 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369482 kubelet[2640]: E0213 19:33:55.369434 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.369809 kubelet[2640]: E0213 19:33:55.369789 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.369809 kubelet[2640]: W0213 19:33:55.369798 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.369809 kubelet[2640]: E0213 19:33:55.369807 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.371106 kubelet[2640]: E0213 19:33:55.370220 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.371106 kubelet[2640]: W0213 19:33:55.370233 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.371106 kubelet[2640]: E0213 19:33:55.370241 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.371106 kubelet[2640]: E0213 19:33:55.370589 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.371106 kubelet[2640]: W0213 19:33:55.370597 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.371106 kubelet[2640]: E0213 19:33:55.370606 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.372294 kubelet[2640]: E0213 19:33:55.372267 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.372294 kubelet[2640]: W0213 19:33:55.372284 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.372374 kubelet[2640]: E0213 19:33:55.372294 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.372666 kubelet[2640]: E0213 19:33:55.372644 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.372666 kubelet[2640]: W0213 19:33:55.372659 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.372738 kubelet[2640]: E0213 19:33:55.372668 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.372883 kubelet[2640]: E0213 19:33:55.372858 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.372883 kubelet[2640]: W0213 19:33:55.372872 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.372883 kubelet[2640]: E0213 19:33:55.372882 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.374212 kubelet[2640]: E0213 19:33:55.373227 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.374212 kubelet[2640]: W0213 19:33:55.373241 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.374212 kubelet[2640]: E0213 19:33:55.373254 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.374387 kubelet[2640]: E0213 19:33:55.374364 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.374387 kubelet[2640]: W0213 19:33:55.374379 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.374455 kubelet[2640]: E0213 19:33:55.374403 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.374751 kubelet[2640]: E0213 19:33:55.374690 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:55.376783 containerd[1495]: time="2025-02-13T19:33:55.376738763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7xzm,Uid:86f48d19-cc07-46f8-8ca7-06a1effb72ea,Namespace:calico-system,Attempt:0,}" Feb 13 19:33:55.377092 kubelet[2640]: E0213 19:33:55.376855 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.377092 kubelet[2640]: W0213 19:33:55.376866 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.377987 kubelet[2640]: E0213 19:33:55.377959 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.378032 kubelet[2640]: I0213 19:33:55.378001 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1b3660a1-47a7-4062-b8e4-0e63486cf899-varrun\") pod \"csi-node-driver-mrdz6\" (UID: \"1b3660a1-47a7-4062-b8e4-0e63486cf899\") " pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:33:55.378836 kubelet[2640]: E0213 19:33:55.378793 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.378892 kubelet[2640]: W0213 19:33:55.378863 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.379238 kubelet[2640]: E0213 19:33:55.378972 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.379238 kubelet[2640]: I0213 19:33:55.378994 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1b3660a1-47a7-4062-b8e4-0e63486cf899-registration-dir\") pod \"csi-node-driver-mrdz6\" (UID: \"1b3660a1-47a7-4062-b8e4-0e63486cf899\") " pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:33:55.382228 kubelet[2640]: E0213 19:33:55.381437 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.382228 kubelet[2640]: W0213 19:33:55.381456 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.382228 kubelet[2640]: E0213 19:33:55.381600 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.382228 kubelet[2640]: I0213 19:33:55.381712 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxzt\" (UniqueName: \"kubernetes.io/projected/1b3660a1-47a7-4062-b8e4-0e63486cf899-kube-api-access-dqxzt\") pod \"csi-node-driver-mrdz6\" (UID: \"1b3660a1-47a7-4062-b8e4-0e63486cf899\") " pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:33:55.385477 kubelet[2640]: E0213 19:33:55.385424 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.385535 kubelet[2640]: W0213 19:33:55.385483 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.385753 kubelet[2640]: E0213 19:33:55.385725 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.385964 kubelet[2640]: E0213 19:33:55.385924 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.385964 kubelet[2640]: W0213 19:33:55.385962 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.386084 kubelet[2640]: E0213 19:33:55.386030 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.386344 kubelet[2640]: E0213 19:33:55.386321 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.386344 kubelet[2640]: W0213 19:33:55.386337 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.386846 kubelet[2640]: E0213 19:33:55.386809 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.386887 kubelet[2640]: E0213 19:33:55.386857 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.386887 kubelet[2640]: W0213 19:33:55.386867 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.386942 kubelet[2640]: I0213 19:33:55.386878 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b3660a1-47a7-4062-b8e4-0e63486cf899-kubelet-dir\") pod \"csi-node-driver-mrdz6\" (UID: \"1b3660a1-47a7-4062-b8e4-0e63486cf899\") " pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:33:55.386942 kubelet[2640]: E0213 19:33:55.386909 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387114 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.388210 kubelet[2640]: W0213 19:33:55.387128 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387156 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387423 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.388210 kubelet[2640]: W0213 19:33:55.387434 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387445 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387641 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.388210 kubelet[2640]: W0213 19:33:55.387649 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387659 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.388210 kubelet[2640]: E0213 19:33:55.387846 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.388461 kubelet[2640]: W0213 19:33:55.387854 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.388461 kubelet[2640]: E0213 19:33:55.387864 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.388461 kubelet[2640]: I0213 19:33:55.387886 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1b3660a1-47a7-4062-b8e4-0e63486cf899-socket-dir\") pod \"csi-node-driver-mrdz6\" (UID: \"1b3660a1-47a7-4062-b8e4-0e63486cf899\") " pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:33:55.389144 kubelet[2640]: E0213 19:33:55.389117 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.389144 kubelet[2640]: W0213 19:33:55.389137 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.389231 kubelet[2640]: E0213 19:33:55.389148 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.390086 kubelet[2640]: E0213 19:33:55.390049 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.390133 kubelet[2640]: W0213 19:33:55.390080 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.390328 kubelet[2640]: E0213 19:33:55.390298 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.393463 kubelet[2640]: E0213 19:33:55.393433 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.393463 kubelet[2640]: W0213 19:33:55.393458 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.393553 kubelet[2640]: E0213 19:33:55.393475 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.393973 kubelet[2640]: E0213 19:33:55.393946 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.393973 kubelet[2640]: W0213 19:33:55.393969 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.394041 kubelet[2640]: E0213 19:33:55.393983 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.421331 containerd[1495]: time="2025-02-13T19:33:55.421148150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:55.421331 containerd[1495]: time="2025-02-13T19:33:55.421253358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:55.421331 containerd[1495]: time="2025-02-13T19:33:55.421268937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:55.421925 containerd[1495]: time="2025-02-13T19:33:55.421382000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:55.445382 systemd[1]: Started cri-containerd-c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7.scope - libcontainer container c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7. Feb 13 19:33:55.479312 containerd[1495]: time="2025-02-13T19:33:55.479253055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7xzm,Uid:86f48d19-cc07-46f8-8ca7-06a1effb72ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\"" Feb 13 19:33:55.480360 kubelet[2640]: E0213 19:33:55.480339 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:55.489943 kubelet[2640]: E0213 19:33:55.489918 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.489943 kubelet[2640]: W0213 19:33:55.489940 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.490017 kubelet[2640]: E0213 19:33:55.489959 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.490217 kubelet[2640]: E0213 19:33:55.490188 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.490217 kubelet[2640]: W0213 19:33:55.490212 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.490283 kubelet[2640]: E0213 19:33:55.490234 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.490676 kubelet[2640]: E0213 19:33:55.490659 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.490676 kubelet[2640]: W0213 19:33:55.490672 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.490739 kubelet[2640]: E0213 19:33:55.490688 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.490996 kubelet[2640]: E0213 19:33:55.490960 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.490996 kubelet[2640]: W0213 19:33:55.490976 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.490996 kubelet[2640]: E0213 19:33:55.490987 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.491255 kubelet[2640]: E0213 19:33:55.491238 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.491255 kubelet[2640]: W0213 19:33:55.491250 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.491324 kubelet[2640]: E0213 19:33:55.491265 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.491531 kubelet[2640]: E0213 19:33:55.491515 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.491531 kubelet[2640]: W0213 19:33:55.491527 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.491709 kubelet[2640]: E0213 19:33:55.491617 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.491756 kubelet[2640]: E0213 19:33:55.491740 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.491756 kubelet[2640]: W0213 19:33:55.491750 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.491846 kubelet[2640]: E0213 19:33:55.491808 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.492013 kubelet[2640]: E0213 19:33:55.491993 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.492013 kubelet[2640]: W0213 19:33:55.492008 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.492095 kubelet[2640]: E0213 19:33:55.492074 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.492327 kubelet[2640]: E0213 19:33:55.492310 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.492327 kubelet[2640]: W0213 19:33:55.492325 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.492461 kubelet[2640]: E0213 19:33:55.492424 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.492636 kubelet[2640]: E0213 19:33:55.492620 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.492636 kubelet[2640]: W0213 19:33:55.492632 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.492687 kubelet[2640]: E0213 19:33:55.492672 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.492911 kubelet[2640]: E0213 19:33:55.492881 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.492911 kubelet[2640]: W0213 19:33:55.492894 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.493100 kubelet[2640]: E0213 19:33:55.493028 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.493202 kubelet[2640]: E0213 19:33:55.493172 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.493202 kubelet[2640]: W0213 19:33:55.493184 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.493269 kubelet[2640]: E0213 19:33:55.493248 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.493641 kubelet[2640]: E0213 19:33:55.493452 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.493641 kubelet[2640]: W0213 19:33:55.493467 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.493641 kubelet[2640]: E0213 19:33:55.493501 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.493725 kubelet[2640]: E0213 19:33:55.493703 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.493725 kubelet[2640]: W0213 19:33:55.493712 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.493765 kubelet[2640]: E0213 19:33:55.493736 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.494228 kubelet[2640]: E0213 19:33:55.493932 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.494228 kubelet[2640]: W0213 19:33:55.493945 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.494228 kubelet[2640]: E0213 19:33:55.494015 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.494315 kubelet[2640]: E0213 19:33:55.494241 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.494315 kubelet[2640]: W0213 19:33:55.494251 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.494315 kubelet[2640]: E0213 19:33:55.494269 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.494704 kubelet[2640]: E0213 19:33:55.494592 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.494704 kubelet[2640]: W0213 19:33:55.494606 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.495230 kubelet[2640]: E0213 19:33:55.494988 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.495380 kubelet[2640]: E0213 19:33:55.495356 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.495380 kubelet[2640]: W0213 19:33:55.495372 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.495469 kubelet[2640]: E0213 19:33:55.495432 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.495648 kubelet[2640]: E0213 19:33:55.495628 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.495648 kubelet[2640]: W0213 19:33:55.495640 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.495743 kubelet[2640]: E0213 19:33:55.495724 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.496074 kubelet[2640]: E0213 19:33:55.496014 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.496074 kubelet[2640]: W0213 19:33:55.496028 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.496139 kubelet[2640]: E0213 19:33:55.496077 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.496615 kubelet[2640]: E0213 19:33:55.496311 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.496615 kubelet[2640]: W0213 19:33:55.496329 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.496615 kubelet[2640]: E0213 19:33:55.496367 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.496615 kubelet[2640]: E0213 19:33:55.496608 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.496615 kubelet[2640]: W0213 19:33:55.496618 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.496731 kubelet[2640]: E0213 19:33:55.496656 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.497145 kubelet[2640]: E0213 19:33:55.496854 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.497145 kubelet[2640]: W0213 19:33:55.496866 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.497145 kubelet[2640]: E0213 19:33:55.496881 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.497318 kubelet[2640]: E0213 19:33:55.497294 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.497348 kubelet[2640]: W0213 19:33:55.497317 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.497348 kubelet[2640]: E0213 19:33:55.497335 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:55.497705 kubelet[2640]: E0213 19:33:55.497676 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.497705 kubelet[2640]: W0213 19:33:55.497695 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.497761 kubelet[2640]: E0213 19:33:55.497709 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:55.505682 kubelet[2640]: E0213 19:33:55.505618 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:55.505682 kubelet[2640]: W0213 19:33:55.505635 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:55.505682 kubelet[2640]: E0213 19:33:55.505651 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:56.624686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743017002.mount: Deactivated successfully. Feb 13 19:33:56.988620 kubelet[2640]: E0213 19:33:56.988456 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:33:57.438319 containerd[1495]: time="2025-02-13T19:33:57.438242695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:57.439212 containerd[1495]: time="2025-02-13T19:33:57.439116919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 19:33:57.440546 containerd[1495]: time="2025-02-13T19:33:57.440520479Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:57.442709 containerd[1495]: time="2025-02-13T19:33:57.442662889Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:57.443544 containerd[1495]: time="2025-02-13T19:33:57.443496887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.231905041s" Feb 13 19:33:57.443544 containerd[1495]: time="2025-02-13T19:33:57.443543905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:33:57.444888 containerd[1495]: time="2025-02-13T19:33:57.444840013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:33:57.454781 containerd[1495]: time="2025-02-13T19:33:57.454726457Z" level=info msg="CreateContainer within sandbox \"cc508a2a5f33d8e84f8c99ba91d645df470ab3119e8463745d184605aff4f399\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:33:57.505433 containerd[1495]: time="2025-02-13T19:33:57.505359016Z" level=info msg="CreateContainer within sandbox \"cc508a2a5f33d8e84f8c99ba91d645df470ab3119e8463745d184605aff4f399\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"08b54277840d6047538cc8aaf023a88ebd357d26c48220e2160da5ce70620543\"" Feb 13 19:33:57.506108 containerd[1495]: time="2025-02-13T19:33:57.506055084Z" level=info msg="StartContainer for \"08b54277840d6047538cc8aaf023a88ebd357d26c48220e2160da5ce70620543\"" Feb 13 19:33:57.540397 systemd[1]: Started cri-containerd-08b54277840d6047538cc8aaf023a88ebd357d26c48220e2160da5ce70620543.scope - libcontainer container 
08b54277840d6047538cc8aaf023a88ebd357d26c48220e2160da5ce70620543. Feb 13 19:33:57.615963 containerd[1495]: time="2025-02-13T19:33:57.615824105Z" level=info msg="StartContainer for \"08b54277840d6047538cc8aaf023a88ebd357d26c48220e2160da5ce70620543\" returns successfully" Feb 13 19:33:58.041961 kubelet[2640]: E0213 19:33:58.041675 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:58.091027 kubelet[2640]: E0213 19:33:58.090985 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.091027 kubelet[2640]: W0213 19:33:58.091011 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.091027 kubelet[2640]: E0213 19:33:58.091036 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.091275 kubelet[2640]: E0213 19:33:58.091254 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.091275 kubelet[2640]: W0213 19:33:58.091267 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.091336 kubelet[2640]: E0213 19:33:58.091277 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.091560 kubelet[2640]: E0213 19:33:58.091539 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.091560 kubelet[2640]: W0213 19:33:58.091551 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.091624 kubelet[2640]: E0213 19:33:58.091563 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.091832 kubelet[2640]: E0213 19:33:58.091811 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.091832 kubelet[2640]: W0213 19:33:58.091823 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.091904 kubelet[2640]: E0213 19:33:58.091833 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.092081 kubelet[2640]: E0213 19:33:58.092066 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.092081 kubelet[2640]: W0213 19:33:58.092077 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.092143 kubelet[2640]: E0213 19:33:58.092086 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.092320 kubelet[2640]: E0213 19:33:58.092303 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.092320 kubelet[2640]: W0213 19:33:58.092316 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.092396 kubelet[2640]: E0213 19:33:58.092328 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.092530 kubelet[2640]: E0213 19:33:58.092516 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.092530 kubelet[2640]: W0213 19:33:58.092526 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.092587 kubelet[2640]: E0213 19:33:58.092535 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.092752 kubelet[2640]: E0213 19:33:58.092738 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.092752 kubelet[2640]: W0213 19:33:58.092749 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.092805 kubelet[2640]: E0213 19:33:58.092760 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.092970 kubelet[2640]: E0213 19:33:58.092958 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.092970 kubelet[2640]: W0213 19:33:58.092966 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.093128 kubelet[2640]: E0213 19:33:58.092973 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.093153 kubelet[2640]: E0213 19:33:58.093134 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.093153 kubelet[2640]: W0213 19:33:58.093142 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.093211 kubelet[2640]: E0213 19:33:58.093151 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.093388 kubelet[2640]: E0213 19:33:58.093375 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.093388 kubelet[2640]: W0213 19:33:58.093386 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.093441 kubelet[2640]: E0213 19:33:58.093394 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.093576 kubelet[2640]: E0213 19:33:58.093565 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.093602 kubelet[2640]: W0213 19:33:58.093575 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.093602 kubelet[2640]: E0213 19:33:58.093584 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.093805 kubelet[2640]: E0213 19:33:58.093794 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.093841 kubelet[2640]: W0213 19:33:58.093805 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.093841 kubelet[2640]: E0213 19:33:58.093813 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.094003 kubelet[2640]: E0213 19:33:58.093992 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.094003 kubelet[2640]: W0213 19:33:58.094001 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.094044 kubelet[2640]: E0213 19:33:58.094010 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.094228 kubelet[2640]: E0213 19:33:58.094217 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.094264 kubelet[2640]: W0213 19:33:58.094228 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.094264 kubelet[2640]: E0213 19:33:58.094237 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.107117 kubelet[2640]: E0213 19:33:58.107076 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.107117 kubelet[2640]: W0213 19:33:58.107094 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.107117 kubelet[2640]: E0213 19:33:58.107114 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.107399 kubelet[2640]: E0213 19:33:58.107368 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.107399 kubelet[2640]: W0213 19:33:58.107381 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.107399 kubelet[2640]: E0213 19:33:58.107398 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.107661 kubelet[2640]: E0213 19:33:58.107635 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.107661 kubelet[2640]: W0213 19:33:58.107650 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.107743 kubelet[2640]: E0213 19:33:58.107666 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.107902 kubelet[2640]: E0213 19:33:58.107878 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.107902 kubelet[2640]: W0213 19:33:58.107890 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.107974 kubelet[2640]: E0213 19:33:58.107904 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.108123 kubelet[2640]: E0213 19:33:58.108099 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.108123 kubelet[2640]: W0213 19:33:58.108111 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.108184 kubelet[2640]: E0213 19:33:58.108124 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.108391 kubelet[2640]: E0213 19:33:58.108347 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.108391 kubelet[2640]: W0213 19:33:58.108379 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.108464 kubelet[2640]: E0213 19:33:58.108395 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.108639 kubelet[2640]: E0213 19:33:58.108622 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.108639 kubelet[2640]: W0213 19:33:58.108634 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.108712 kubelet[2640]: E0213 19:33:58.108663 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.108864 kubelet[2640]: E0213 19:33:58.108847 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.108864 kubelet[2640]: W0213 19:33:58.108859 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.108934 kubelet[2640]: E0213 19:33:58.108886 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.109097 kubelet[2640]: E0213 19:33:58.109080 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.109097 kubelet[2640]: W0213 19:33:58.109091 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.109160 kubelet[2640]: E0213 19:33:58.109105 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.109399 kubelet[2640]: E0213 19:33:58.109380 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.109399 kubelet[2640]: W0213 19:33:58.109394 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.109496 kubelet[2640]: E0213 19:33:58.109410 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.109626 kubelet[2640]: E0213 19:33:58.109612 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.109626 kubelet[2640]: W0213 19:33:58.109623 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.109684 kubelet[2640]: E0213 19:33:58.109636 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.109852 kubelet[2640]: E0213 19:33:58.109838 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.109892 kubelet[2640]: W0213 19:33:58.109851 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.109892 kubelet[2640]: E0213 19:33:58.109865 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.110067 kubelet[2640]: E0213 19:33:58.110054 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.110067 kubelet[2640]: W0213 19:33:58.110065 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.110125 kubelet[2640]: E0213 19:33:58.110078 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.110312 kubelet[2640]: E0213 19:33:58.110297 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.110371 kubelet[2640]: W0213 19:33:58.110310 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.110371 kubelet[2640]: E0213 19:33:58.110325 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.110572 kubelet[2640]: E0213 19:33:58.110557 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.110572 kubelet[2640]: W0213 19:33:58.110570 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.110625 kubelet[2640]: E0213 19:33:58.110584 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.110876 kubelet[2640]: E0213 19:33:58.110861 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.110876 kubelet[2640]: W0213 19:33:58.110873 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.110953 kubelet[2640]: E0213 19:33:58.110887 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.111121 kubelet[2640]: E0213 19:33:58.111104 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.111121 kubelet[2640]: W0213 19:33:58.111120 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.111201 kubelet[2640]: E0213 19:33:58.111131 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:58.111397 kubelet[2640]: E0213 19:33:58.111381 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:58.111397 kubelet[2640]: W0213 19:33:58.111395 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:58.111480 kubelet[2640]: E0213 19:33:58.111406 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:58.988579 kubelet[2640]: E0213 19:33:58.988506 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:33:59.043000 kubelet[2640]: I0213 19:33:59.042961 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:33:59.043439 kubelet[2640]: E0213 19:33:59.043310 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:59.101639 kubelet[2640]: E0213 19:33:59.101599 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.101639 kubelet[2640]: W0213 19:33:59.101628 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.101639 kubelet[2640]: E0213 19:33:59.101653 2640 
plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.101894 kubelet[2640]: E0213 19:33:59.101878 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.101894 kubelet[2640]: W0213 19:33:59.101893 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.101961 kubelet[2640]: E0213 19:33:59.101902 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.102135 kubelet[2640]: E0213 19:33:59.102108 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.102135 kubelet[2640]: W0213 19:33:59.102122 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.102135 kubelet[2640]: E0213 19:33:59.102132 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.102420 kubelet[2640]: E0213 19:33:59.102404 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.102420 kubelet[2640]: W0213 19:33:59.102416 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.102508 kubelet[2640]: E0213 19:33:59.102427 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.102668 kubelet[2640]: E0213 19:33:59.102653 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.102668 kubelet[2640]: W0213 19:33:59.102665 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.102727 kubelet[2640]: E0213 19:33:59.102679 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.102901 kubelet[2640]: E0213 19:33:59.102886 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.102901 kubelet[2640]: W0213 19:33:59.102898 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.102972 kubelet[2640]: E0213 19:33:59.102908 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.103133 kubelet[2640]: E0213 19:33:59.103117 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.103133 kubelet[2640]: W0213 19:33:59.103129 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.103184 kubelet[2640]: E0213 19:33:59.103138 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.103392 kubelet[2640]: E0213 19:33:59.103375 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.103392 kubelet[2640]: W0213 19:33:59.103387 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.103454 kubelet[2640]: E0213 19:33:59.103397 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.103636 kubelet[2640]: E0213 19:33:59.103620 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.103636 kubelet[2640]: W0213 19:33:59.103632 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.103687 kubelet[2640]: E0213 19:33:59.103642 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.103863 kubelet[2640]: E0213 19:33:59.103849 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.103863 kubelet[2640]: W0213 19:33:59.103861 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.103915 kubelet[2640]: E0213 19:33:59.103872 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.104167 kubelet[2640]: E0213 19:33:59.104140 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.104167 kubelet[2640]: W0213 19:33:59.104153 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.104167 kubelet[2640]: E0213 19:33:59.104163 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.104423 kubelet[2640]: E0213 19:33:59.104404 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.104423 kubelet[2640]: W0213 19:33:59.104417 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.104488 kubelet[2640]: E0213 19:33:59.104427 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.104650 kubelet[2640]: E0213 19:33:59.104637 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.104650 kubelet[2640]: W0213 19:33:59.104647 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.104702 kubelet[2640]: E0213 19:33:59.104655 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.104878 kubelet[2640]: E0213 19:33:59.104863 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.104878 kubelet[2640]: W0213 19:33:59.104874 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.104930 kubelet[2640]: E0213 19:33:59.104882 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.105084 kubelet[2640]: E0213 19:33:59.105072 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.105084 kubelet[2640]: W0213 19:33:59.105081 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.105125 kubelet[2640]: E0213 19:33:59.105089 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.115732 kubelet[2640]: E0213 19:33:59.115684 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.115732 kubelet[2640]: W0213 19:33:59.115712 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.115860 kubelet[2640]: E0213 19:33:59.115739 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.116047 kubelet[2640]: E0213 19:33:59.116024 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.116047 kubelet[2640]: W0213 19:33:59.116037 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.116097 kubelet[2640]: E0213 19:33:59.116053 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.116332 kubelet[2640]: E0213 19:33:59.116312 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.116332 kubelet[2640]: W0213 19:33:59.116330 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.116428 kubelet[2640]: E0213 19:33:59.116360 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.116610 kubelet[2640]: E0213 19:33:59.116599 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.116610 kubelet[2640]: W0213 19:33:59.116609 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.116658 kubelet[2640]: E0213 19:33:59.116623 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.116856 kubelet[2640]: E0213 19:33:59.116838 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.116856 kubelet[2640]: W0213 19:33:59.116848 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.116903 kubelet[2640]: E0213 19:33:59.116860 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.117109 kubelet[2640]: E0213 19:33:59.117093 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.117109 kubelet[2640]: W0213 19:33:59.117103 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.117155 kubelet[2640]: E0213 19:33:59.117115 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.117589 kubelet[2640]: E0213 19:33:59.117563 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.117589 kubelet[2640]: W0213 19:33:59.117580 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.117654 kubelet[2640]: E0213 19:33:59.117599 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.117826 kubelet[2640]: E0213 19:33:59.117808 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.117826 kubelet[2640]: W0213 19:33:59.117820 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.117895 kubelet[2640]: E0213 19:33:59.117850 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.118031 kubelet[2640]: E0213 19:33:59.118019 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.118060 kubelet[2640]: W0213 19:33:59.118031 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.118099 kubelet[2640]: E0213 19:33:59.118070 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.118284 kubelet[2640]: E0213 19:33:59.118268 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.118284 kubelet[2640]: W0213 19:33:59.118280 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.118365 kubelet[2640]: E0213 19:33:59.118299 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.118522 kubelet[2640]: E0213 19:33:59.118510 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.118522 kubelet[2640]: W0213 19:33:59.118521 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.118584 kubelet[2640]: E0213 19:33:59.118537 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.118739 kubelet[2640]: E0213 19:33:59.118727 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.118739 kubelet[2640]: W0213 19:33:59.118737 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.118794 kubelet[2640]: E0213 19:33:59.118753 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.119062 kubelet[2640]: E0213 19:33:59.119027 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.119062 kubelet[2640]: W0213 19:33:59.119049 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.119124 kubelet[2640]: E0213 19:33:59.119075 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.119329 kubelet[2640]: E0213 19:33:59.119285 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.119329 kubelet[2640]: W0213 19:33:59.119298 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.119329 kubelet[2640]: E0213 19:33:59.119315 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.119599 kubelet[2640]: E0213 19:33:59.119579 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.119599 kubelet[2640]: W0213 19:33:59.119593 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.119687 kubelet[2640]: E0213 19:33:59.119611 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.119954 kubelet[2640]: E0213 19:33:59.119936 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.119954 kubelet[2640]: W0213 19:33:59.119949 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.120023 kubelet[2640]: E0213 19:33:59.119965 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.120208 kubelet[2640]: E0213 19:33:59.120163 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.120208 kubelet[2640]: W0213 19:33:59.120177 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.120208 kubelet[2640]: E0213 19:33:59.120204 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:33:59.120456 kubelet[2640]: E0213 19:33:59.120435 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:33:59.120456 kubelet[2640]: W0213 19:33:59.120446 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:33:59.120456 kubelet[2640]: E0213 19:33:59.120454 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:33:59.774050 containerd[1495]: time="2025-02-13T19:33:59.773981227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:59.775219 containerd[1495]: time="2025-02-13T19:33:59.775121360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 19:33:59.776463 containerd[1495]: time="2025-02-13T19:33:59.776430171Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:59.779050 containerd[1495]: time="2025-02-13T19:33:59.778980486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:59.779710 containerd[1495]: time="2025-02-13T19:33:59.779665854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.334774765s" Feb 13 19:33:59.779710 containerd[1495]: time="2025-02-13T19:33:59.779702864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:33:59.781906 containerd[1495]: time="2025-02-13T19:33:59.781874528Z" level=info msg="CreateContainer within sandbox \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:33:59.803170 containerd[1495]: time="2025-02-13T19:33:59.803127635Z" level=info msg="CreateContainer within sandbox \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80\"" Feb 13 19:33:59.803724 containerd[1495]: time="2025-02-13T19:33:59.803684733Z" level=info msg="StartContainer for \"0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80\"" Feb 13 19:33:59.843490 systemd[1]: Started cri-containerd-0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80.scope - libcontainer container 0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80. Feb 13 19:33:59.897649 systemd[1]: cri-containerd-0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80.scope: Deactivated successfully. Feb 13 19:34:00.311747 containerd[1495]: time="2025-02-13T19:34:00.311612303Z" level=info msg="StartContainer for \"0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80\" returns successfully" Feb 13 19:34:00.322661 kubelet[2640]: E0213 19:34:00.321573 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:00.338621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:00.343781 kubelet[2640]: I0213 19:34:00.343334 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-748559d5c4-mp6fj" podStartSLOduration=4.10964119 podStartE2EDuration="7.343303497s" podCreationTimestamp="2025-02-13 19:33:53 +0000 UTC" firstStartedPulling="2025-02-13 19:33:54.210909933 +0000 UTC m=+14.344226718" lastFinishedPulling="2025-02-13 19:33:57.44457224 +0000 UTC m=+17.577889025" observedRunningTime="2025-02-13 19:33:58.157408447 +0000 UTC m=+18.290725252" watchObservedRunningTime="2025-02-13 19:34:00.343303497 +0000 UTC m=+20.476620282" Feb 13 19:34:00.352722 containerd[1495]: time="2025-02-13T19:34:00.352628767Z" level=info msg="shim disconnected" id=0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80 namespace=k8s.io Feb 13 19:34:00.352722 containerd[1495]: time="2025-02-13T19:34:00.352708477Z" level=warning msg="cleaning up after shim disconnected" id=0f3fd12996a0647ff3507a897cf87f82e6db2877bf3787f836b3ff904195cc80 namespace=k8s.io Feb 13 19:34:00.352722 containerd[1495]: time="2025-02-13T19:34:00.352717564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:00.988866 kubelet[2640]: E0213 19:34:00.988803 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:01.323897 kubelet[2640]: E0213 19:34:01.323855 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:01.324760 containerd[1495]: time="2025-02-13T19:34:01.324664607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:34:02.988823 kubelet[2640]: E0213 19:34:02.988766 
2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:04.988814 kubelet[2640]: E0213 19:34:04.988748 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:05.902844 kubelet[2640]: I0213 19:34:05.902800 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:05.903370 kubelet[2640]: E0213 19:34:05.903332 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:06.334717 kubelet[2640]: E0213 19:34:06.334685 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:06.380952 containerd[1495]: time="2025-02-13T19:34:06.380877090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:06.381759 containerd[1495]: time="2025-02-13T19:34:06.381710264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:34:06.383008 containerd[1495]: time="2025-02-13T19:34:06.382946146Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
19:34:06.385401 containerd[1495]: time="2025-02-13T19:34:06.385364238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:06.386128 containerd[1495]: time="2025-02-13T19:34:06.386076947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.061335204s" Feb 13 19:34:06.386128 containerd[1495]: time="2025-02-13T19:34:06.386109057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:34:06.388681 containerd[1495]: time="2025-02-13T19:34:06.388643507Z" level=info msg="CreateContainer within sandbox \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:34:06.420651 containerd[1495]: time="2025-02-13T19:34:06.420578764Z" level=info msg="CreateContainer within sandbox \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3\"" Feb 13 19:34:06.421445 containerd[1495]: time="2025-02-13T19:34:06.421394016Z" level=info msg="StartContainer for \"ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3\"" Feb 13 19:34:06.464450 systemd[1]: Started cri-containerd-ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3.scope - libcontainer container ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3. 
Feb 13 19:34:06.503288 containerd[1495]: time="2025-02-13T19:34:06.503228452Z" level=info msg="StartContainer for \"ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3\" returns successfully" Feb 13 19:34:06.988143 kubelet[2640]: E0213 19:34:06.988061 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:07.342245 kubelet[2640]: E0213 19:34:07.340781 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:07.726681 containerd[1495]: time="2025-02-13T19:34:07.726537571Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:34:07.729831 systemd[1]: cri-containerd-ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3.scope: Deactivated successfully. Feb 13 19:34:07.747871 kubelet[2640]: I0213 19:34:07.747425 2640 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:34:07.754781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3-rootfs.mount: Deactivated successfully. Feb 13 19:34:07.800440 systemd[1]: Created slice kubepods-burstable-pod7d4597d2_6027_4ade_9599_11a9fb3937e8.slice - libcontainer container kubepods-burstable-pod7d4597d2_6027_4ade_9599_11a9fb3937e8.slice. 
Feb 13 19:34:07.806504 systemd[1]: Created slice kubepods-besteffort-pod9dd6108c_e0cd_41e5_bd6d_20be6e54c890.slice - libcontainer container kubepods-besteffort-pod9dd6108c_e0cd_41e5_bd6d_20be6e54c890.slice. Feb 13 19:34:07.812170 systemd[1]: Created slice kubepods-besteffort-podb91e83e8_b503_4a59_bbef_bf279f88f9d9.slice - libcontainer container kubepods-besteffort-podb91e83e8_b503_4a59_bbef_bf279f88f9d9.slice. Feb 13 19:34:07.817117 systemd[1]: Created slice kubepods-besteffort-pod8000c7db_76d3_42a2_88ef_9e561c300a00.slice - libcontainer container kubepods-besteffort-pod8000c7db_76d3_42a2_88ef_9e561c300a00.slice. Feb 13 19:34:07.822393 systemd[1]: Created slice kubepods-burstable-podb4f5f379_1bf2_49a8_b809_0761222a6c07.slice - libcontainer container kubepods-burstable-podb4f5f379_1bf2_49a8_b809_0761222a6c07.slice. Feb 13 19:34:07.877219 kubelet[2640]: I0213 19:34:07.877142 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9dd6108c-e0cd-41e5-bd6d-20be6e54c890-calico-apiserver-certs\") pod \"calico-apiserver-6c4d7bff6f-h9gmf\" (UID: \"9dd6108c-e0cd-41e5-bd6d-20be6e54c890\") " pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:07.877219 kubelet[2640]: I0213 19:34:07.877220 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d4597d2-6027-4ade-9599-11a9fb3937e8-config-volume\") pod \"coredns-668d6bf9bc-n72h7\" (UID: \"7d4597d2-6027-4ade-9599-11a9fb3937e8\") " pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:07.877474 kubelet[2640]: I0213 19:34:07.877253 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8000c7db-76d3-42a2-88ef-9e561c300a00-calico-apiserver-certs\") pod \"calico-apiserver-6c4d7bff6f-kmrnk\" (UID: 
\"8000c7db-76d3-42a2-88ef-9e561c300a00\") " pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:07.877474 kubelet[2640]: I0213 19:34:07.877282 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4f5f379-1bf2-49a8-b809-0761222a6c07-config-volume\") pod \"coredns-668d6bf9bc-822rj\" (UID: \"b4f5f379-1bf2-49a8-b809-0761222a6c07\") " pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:07.877474 kubelet[2640]: I0213 19:34:07.877304 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjv8f\" (UniqueName: \"kubernetes.io/projected/b4f5f379-1bf2-49a8-b809-0761222a6c07-kube-api-access-zjv8f\") pod \"coredns-668d6bf9bc-822rj\" (UID: \"b4f5f379-1bf2-49a8-b809-0761222a6c07\") " pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:07.877474 kubelet[2640]: I0213 19:34:07.877328 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8gzz\" (UniqueName: \"kubernetes.io/projected/7d4597d2-6027-4ade-9599-11a9fb3937e8-kube-api-access-t8gzz\") pod \"coredns-668d6bf9bc-n72h7\" (UID: \"7d4597d2-6027-4ade-9599-11a9fb3937e8\") " pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:07.877474 kubelet[2640]: I0213 19:34:07.877349 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz898\" (UniqueName: \"kubernetes.io/projected/b91e83e8-b503-4a59-bbef-bf279f88f9d9-kube-api-access-fz898\") pod \"calico-kube-controllers-7758bf7464-bz5p8\" (UID: \"b91e83e8-b503-4a59-bbef-bf279f88f9d9\") " pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:07.877612 kubelet[2640]: I0213 19:34:07.877371 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnbw\" (UniqueName: 
\"kubernetes.io/projected/8000c7db-76d3-42a2-88ef-9e561c300a00-kube-api-access-jqnbw\") pod \"calico-apiserver-6c4d7bff6f-kmrnk\" (UID: \"8000c7db-76d3-42a2-88ef-9e561c300a00\") " pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:07.877612 kubelet[2640]: I0213 19:34:07.877389 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b91e83e8-b503-4a59-bbef-bf279f88f9d9-tigera-ca-bundle\") pod \"calico-kube-controllers-7758bf7464-bz5p8\" (UID: \"b91e83e8-b503-4a59-bbef-bf279f88f9d9\") " pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:07.877612 kubelet[2640]: I0213 19:34:07.877408 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfv72\" (UniqueName: \"kubernetes.io/projected/9dd6108c-e0cd-41e5-bd6d-20be6e54c890-kube-api-access-gfv72\") pod \"calico-apiserver-6c4d7bff6f-h9gmf\" (UID: \"9dd6108c-e0cd-41e5-bd6d-20be6e54c890\") " pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:08.216580 containerd[1495]: time="2025-02-13T19:34:08.216494598Z" level=info msg="shim disconnected" id=ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3 namespace=k8s.io Feb 13 19:34:08.216580 containerd[1495]: time="2025-02-13T19:34:08.216564439Z" level=warning msg="cleaning up after shim disconnected" id=ad6d79aab8e2f84acbb790638f4b350c75f04bf0cd08d78194a77d2483cdf0b3 namespace=k8s.io Feb 13 19:34:08.216580 containerd[1495]: time="2025-02-13T19:34:08.216576802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:08.276539 kubelet[2640]: E0213 19:34:08.276490 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:08.276539 kubelet[2640]: E0213 19:34:08.276542 2640 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:08.277461 containerd[1495]: time="2025-02-13T19:34:08.277409827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:08.277636 containerd[1495]: time="2025-02-13T19:34:08.277480651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:34:08.277636 containerd[1495]: time="2025-02-13T19:34:08.277537898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:08.277850 containerd[1495]: time="2025-02-13T19:34:08.277796113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:34:08.278010 containerd[1495]: time="2025-02-13T19:34:08.277863360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:08.343643 kubelet[2640]: E0213 19:34:08.343607 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:08.344756 containerd[1495]: time="2025-02-13T19:34:08.344718635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:34:08.433071 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:35352.service - OpenSSH per-connection server daemon (10.0.0.1:35352). 
Feb 13 19:34:08.504048 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 35352 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:08.505852 sshd-session[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:08.511793 systemd-logind[1480]: New session 10 of user core. Feb 13 19:34:08.519321 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:34:08.690410 sshd[3491]: Connection closed by 10.0.0.1 port 35352 Feb 13 19:34:08.690906 sshd-session[3489]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:08.696020 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:35352.service: Deactivated successfully. Feb 13 19:34:08.698805 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:34:08.699687 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:34:08.700967 systemd-logind[1480]: Removed session 10. Feb 13 19:34:08.877033 containerd[1495]: time="2025-02-13T19:34:08.876970504Z" level=error msg="Failed to destroy network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.881599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be-shm.mount: Deactivated successfully. 
Feb 13 19:34:08.882680 containerd[1495]: time="2025-02-13T19:34:08.882148116Z" level=error msg="Failed to destroy network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.886372 containerd[1495]: time="2025-02-13T19:34:08.886338036Z" level=error msg="Failed to destroy network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.887267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde-shm.mount: Deactivated successfully. Feb 13 19:34:08.888053 containerd[1495]: time="2025-02-13T19:34:08.887814068Z" level=error msg="encountered an error cleaning up failed sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.888053 containerd[1495]: time="2025-02-13T19:34:08.887920037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.888053 
containerd[1495]: time="2025-02-13T19:34:08.887955694Z" level=error msg="encountered an error cleaning up failed sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.888053 containerd[1495]: time="2025-02-13T19:34:08.888048157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.888560 kubelet[2640]: E0213 19:34:08.888512 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.888560 kubelet[2640]: E0213 19:34:08.888525 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.888691 kubelet[2640]: E0213 19:34:08.888603 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:08.888691 kubelet[2640]: E0213 19:34:08.888605 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:08.888691 kubelet[2640]: E0213 19:34:08.888629 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:08.888691 kubelet[2640]: E0213 19:34:08.888635 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:08.888801 kubelet[2640]: E0213 19:34:08.888683 2640 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:08.888801 kubelet[2640]: E0213 19:34:08.888681 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podUID="b91e83e8-b503-4a59-bbef-bf279f88f9d9" Feb 13 19:34:08.889030 containerd[1495]: time="2025-02-13T19:34:08.888909796Z" level=error msg="encountered an error cleaning up failed sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 19:34:08.889030 containerd[1495]: time="2025-02-13T19:34:08.888961393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.889099 kubelet[2640]: E0213 19:34:08.889081 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.889129 kubelet[2640]: E0213 19:34:08.889110 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:08.889156 kubelet[2640]: E0213 19:34:08.889128 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" 
Feb 13 19:34:08.889209 kubelet[2640]: E0213 19:34:08.889155 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:08.892646 containerd[1495]: time="2025-02-13T19:34:08.892587282Z" level=error msg="Failed to destroy network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893111 containerd[1495]: time="2025-02-13T19:34:08.892982574Z" level=error msg="Failed to destroy network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893383 containerd[1495]: time="2025-02-13T19:34:08.893179494Z" level=error msg="encountered an error cleaning up failed sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 19:34:08.893383 containerd[1495]: time="2025-02-13T19:34:08.893279412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893580 kubelet[2640]: E0213 19:34:08.893501 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893644 kubelet[2640]: E0213 19:34:08.893598 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:08.893644 kubelet[2640]: E0213 19:34:08.893636 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:08.893702 containerd[1495]: time="2025-02-13T19:34:08.893577492Z" level=error msg="encountered an error cleaning up failed sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893702 containerd[1495]: time="2025-02-13T19:34:08.893623097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893758 kubelet[2640]: E0213 19:34:08.893724 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:08.893862 kubelet[2640]: E0213 19:34:08.893823 2640 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:08.893909 kubelet[2640]: E0213 19:34:08.893891 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:08.893935 kubelet[2640]: E0213 19:34:08.893917 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:08.893997 kubelet[2640]: E0213 19:34:08.893969 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:08.994583 systemd[1]: Created slice kubepods-besteffort-pod1b3660a1_47a7_4062_b8e4_0e63486cf899.slice - libcontainer container kubepods-besteffort-pod1b3660a1_47a7_4062_b8e4_0e63486cf899.slice. Feb 13 19:34:08.997037 containerd[1495]: time="2025-02-13T19:34:08.996991375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:09.057538 containerd[1495]: time="2025-02-13T19:34:09.057479603Z" level=error msg="Failed to destroy network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:09.057919 containerd[1495]: time="2025-02-13T19:34:09.057887158Z" level=error msg="encountered an error cleaning up failed sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:09.057979 containerd[1495]: time="2025-02-13T19:34:09.057955537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:09.058310 kubelet[2640]: E0213 19:34:09.058270 2640 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:09.058369 kubelet[2640]: E0213 19:34:09.058338 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:09.058369 kubelet[2640]: E0213 19:34:09.058358 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:09.058441 kubelet[2640]: E0213 19:34:09.058404 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:09.345763 kubelet[2640]: I0213 19:34:09.345726 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3" Feb 13 19:34:09.346467 containerd[1495]: time="2025-02-13T19:34:09.346379885Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:09.346638 containerd[1495]: time="2025-02-13T19:34:09.346619545Z" level=info msg="Ensure that sandbox fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3 in task-service has been cleanup successfully" Feb 13 19:34:09.346839 containerd[1495]: time="2025-02-13T19:34:09.346820152Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:09.346919 containerd[1495]: time="2025-02-13T19:34:09.346836002Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:09.347060 kubelet[2640]: I0213 19:34:09.347034 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde" Feb 13 19:34:09.347554 containerd[1495]: time="2025-02-13T19:34:09.347531288Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:09.347843 containerd[1495]: time="2025-02-13T19:34:09.347714221Z" level=info msg="Ensure that sandbox fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde in task-service has been cleanup successfully" Feb 13 19:34:09.347993 containerd[1495]: time="2025-02-13T19:34:09.347957438Z" level=info msg="TearDown network for sandbox 
\"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:09.347993 containerd[1495]: time="2025-02-13T19:34:09.347976874Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:09.348480 kubelet[2640]: I0213 19:34:09.348457 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e" Feb 13 19:34:09.349385 containerd[1495]: time="2025-02-13T19:34:09.348873508Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:09.349385 containerd[1495]: time="2025-02-13T19:34:09.349058055Z" level=info msg="Ensure that sandbox 2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e in task-service has been cleanup successfully" Feb 13 19:34:09.349385 containerd[1495]: time="2025-02-13T19:34:09.349270293Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:09.349385 containerd[1495]: time="2025-02-13T19:34:09.349285492Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:09.349748 kubelet[2640]: I0213 19:34:09.349720 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be" Feb 13 19:34:09.350342 containerd[1495]: time="2025-02-13T19:34:09.350100622Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:09.350342 containerd[1495]: time="2025-02-13T19:34:09.350306549Z" level=info msg="Ensure that sandbox 37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be in task-service has been cleanup successfully" Feb 13 19:34:09.350576 containerd[1495]: 
time="2025-02-13T19:34:09.350477770Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:09.350576 containerd[1495]: time="2025-02-13T19:34:09.350492398Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:09.350663 kubelet[2640]: E0213 19:34:09.350375 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:09.350663 kubelet[2640]: E0213 19:34:09.350459 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:09.350732 containerd[1495]: time="2025-02-13T19:34:09.350708534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:1,}" Feb 13 19:34:09.350762 containerd[1495]: time="2025-02-13T19:34:09.350732068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:1,}" Feb 13 19:34:09.351029 containerd[1495]: time="2025-02-13T19:34:09.351005381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:34:09.351649 kubelet[2640]: I0213 19:34:09.351621 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63" Feb 13 19:34:09.352085 containerd[1495]: time="2025-02-13T19:34:09.352062847Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:09.352308 
containerd[1495]: time="2025-02-13T19:34:09.352286858Z" level=info msg="Ensure that sandbox 3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63 in task-service has been cleanup successfully" Feb 13 19:34:09.352470 containerd[1495]: time="2025-02-13T19:34:09.352447630Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:09.352525 containerd[1495]: time="2025-02-13T19:34:09.352469390Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:09.352667 kubelet[2640]: I0213 19:34:09.352646 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd" Feb 13 19:34:09.352822 containerd[1495]: time="2025-02-13T19:34:09.352800733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:1,}" Feb 13 19:34:09.353020 containerd[1495]: time="2025-02-13T19:34:09.352993865Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:09.353169 containerd[1495]: time="2025-02-13T19:34:09.353143156Z" level=info msg="Ensure that sandbox 274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd in task-service has been cleanup successfully" Feb 13 19:34:09.353340 containerd[1495]: time="2025-02-13T19:34:09.353322372Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:09.353370 containerd[1495]: time="2025-02-13T19:34:09.353339234Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:09.353728 containerd[1495]: time="2025-02-13T19:34:09.353700172Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:34:09.358494 containerd[1495]: time="2025-02-13T19:34:09.358463196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:1,}" Feb 13 19:34:09.755634 systemd[1]: run-netns-cni\x2d04df31b5\x2dd763\x2ddc52\x2d127f\x2d2174a3f65f28.mount: Deactivated successfully. Feb 13 19:34:09.755740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd-shm.mount: Deactivated successfully. Feb 13 19:34:09.755836 systemd[1]: run-netns-cni\x2dd55c23ea\x2d6d79\x2dc7f4\x2d84b7\x2d28f53cc44e9b.mount: Deactivated successfully. Feb 13 19:34:09.755928 systemd[1]: run-netns-cni\x2d2bab924e\x2d00e5\x2d07f0\x2d8362\x2d83b2d18846a6.mount: Deactivated successfully. Feb 13 19:34:09.756018 systemd[1]: run-netns-cni\x2d57d27317\x2def8c\x2df2b8\x2d3710\x2d7a09fc99e1c6.mount: Deactivated successfully. Feb 13 19:34:09.756114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e-shm.mount: Deactivated successfully. Feb 13 19:34:09.756243 systemd[1]: run-netns-cni\x2d229fc62d\x2d485e\x2d7e62\x2d045d\x2d0b798f335a29.mount: Deactivated successfully. Feb 13 19:34:09.756362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3-shm.mount: Deactivated successfully. 
Feb 13 19:34:10.404406 containerd[1495]: time="2025-02-13T19:34:10.404337791Z" level=error msg="Failed to destroy network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.405812 containerd[1495]: time="2025-02-13T19:34:10.405776522Z" level=error msg="encountered an error cleaning up failed sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.405909 containerd[1495]: time="2025-02-13T19:34:10.405875568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.406575 kubelet[2640]: E0213 19:34:10.406517 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.406871 kubelet[2640]: E0213 19:34:10.406605 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:10.406871 kubelet[2640]: E0213 19:34:10.406637 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:10.407870 kubelet[2640]: E0213 19:34:10.406702 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:10.409788 containerd[1495]: time="2025-02-13T19:34:10.409725968Z" level=error msg="Failed to destroy network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:34:10.410253 containerd[1495]: time="2025-02-13T19:34:10.410213563Z" level=error msg="encountered an error cleaning up failed sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.410313 containerd[1495]: time="2025-02-13T19:34:10.410292633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.410732 kubelet[2640]: E0213 19:34:10.410577 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.410822 kubelet[2640]: E0213 19:34:10.410788 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:10.410878 kubelet[2640]: E0213 19:34:10.410825 
2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:10.411053 kubelet[2640]: E0213 19:34:10.410985 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:10.436682 containerd[1495]: time="2025-02-13T19:34:10.436602734Z" level=error msg="Failed to destroy network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.437265 containerd[1495]: time="2025-02-13T19:34:10.437244560Z" level=error msg="encountered an error cleaning up failed sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.437417 containerd[1495]: time="2025-02-13T19:34:10.437388089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.437785 kubelet[2640]: E0213 19:34:10.437746 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.437938 kubelet[2640]: E0213 19:34:10.437902 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:10.438021 kubelet[2640]: E0213 19:34:10.438005 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:10.438120 kubelet[2640]: E0213 19:34:10.438098 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:10.442052 containerd[1495]: time="2025-02-13T19:34:10.441980873Z" level=error msg="Failed to destroy network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.442512 containerd[1495]: time="2025-02-13T19:34:10.442478127Z" level=error msg="encountered an error cleaning up failed sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.442568 containerd[1495]: time="2025-02-13T19:34:10.442551204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:1,} 
failed, error" error="failed to setup network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.442733 kubelet[2640]: E0213 19:34:10.442712 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.442990 kubelet[2640]: E0213 19:34:10.442807 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:10.442990 kubelet[2640]: E0213 19:34:10.442827 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:10.442990 kubelet[2640]: E0213 19:34:10.442862 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podUID="b91e83e8-b503-4a59-bbef-bf279f88f9d9" Feb 13 19:34:10.445919 containerd[1495]: time="2025-02-13T19:34:10.445872170Z" level=error msg="Failed to destroy network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.446360 containerd[1495]: time="2025-02-13T19:34:10.446322125Z" level=error msg="encountered an error cleaning up failed sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.446418 containerd[1495]: time="2025-02-13T19:34:10.446392687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.446589 kubelet[2640]: E0213 19:34:10.446555 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.446589 kubelet[2640]: E0213 19:34:10.446584 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:10.446589 kubelet[2640]: E0213 19:34:10.446607 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:10.446840 kubelet[2640]: E0213 19:34:10.446638 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:10.448313 containerd[1495]: time="2025-02-13T19:34:10.448266296Z" level=error msg="Failed to destroy network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.448745 containerd[1495]: time="2025-02-13T19:34:10.448704949Z" level=error msg="encountered an error cleaning up failed sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.448801 containerd[1495]: time="2025-02-13T19:34:10.448762668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.449048 kubelet[2640]: E0213 19:34:10.449002 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:10.449100 kubelet[2640]: E0213 19:34:10.449077 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:10.449141 kubelet[2640]: E0213 19:34:10.449103 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:10.449184 kubelet[2640]: E0213 19:34:10.449161 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:10.756092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391-shm.mount: Deactivated successfully. Feb 13 19:34:10.756216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4-shm.mount: Deactivated successfully. Feb 13 19:34:10.756294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01-shm.mount: Deactivated successfully. Feb 13 19:34:10.756391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df-shm.mount: Deactivated successfully. Feb 13 19:34:10.756483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0-shm.mount: Deactivated successfully. Feb 13 19:34:10.756583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead-shm.mount: Deactivated successfully. 
Feb 13 19:34:11.359399 kubelet[2640]: I0213 19:34:11.359359 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df" Feb 13 19:34:11.359948 containerd[1495]: time="2025-02-13T19:34:11.359917928Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:11.360186 containerd[1495]: time="2025-02-13T19:34:11.360150464Z" level=info msg="Ensure that sandbox 308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df in task-service has been cleanup successfully" Feb 13 19:34:11.360577 containerd[1495]: time="2025-02-13T19:34:11.360545807Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:11.360577 containerd[1495]: time="2025-02-13T19:34:11.360570142Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:11.361385 kubelet[2640]: I0213 19:34:11.361341 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01" Feb 13 19:34:11.361813 containerd[1495]: time="2025-02-13T19:34:11.361768232Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:11.362066 containerd[1495]: time="2025-02-13T19:34:11.362033551Z" level=info msg="Ensure that sandbox 1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01 in task-service has been cleanup successfully" Feb 13 19:34:11.362500 containerd[1495]: time="2025-02-13T19:34:11.362413663Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:11.362500 containerd[1495]: time="2025-02-13T19:34:11.362437188Z" level=info msg="StopPodSandbox for 
\"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:11.362751 containerd[1495]: time="2025-02-13T19:34:11.362731901Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:11.362972 containerd[1495]: time="2025-02-13T19:34:11.362894516Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:11.362972 containerd[1495]: time="2025-02-13T19:34:11.362908934Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:11.363069 kubelet[2640]: I0213 19:34:11.362992 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0" Feb 13 19:34:11.364032 containerd[1495]: time="2025-02-13T19:34:11.363693978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:34:11.364032 containerd[1495]: time="2025-02-13T19:34:11.363762536Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:11.364032 containerd[1495]: time="2025-02-13T19:34:11.363916386Z" level=info msg="Ensure that sandbox ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0 in task-service has been cleanup successfully" Feb 13 19:34:11.364448 containerd[1495]: time="2025-02-13T19:34:11.364430030Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:11.364515 containerd[1495]: time="2025-02-13T19:34:11.364502977Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 
19:34:11.364866 containerd[1495]: time="2025-02-13T19:34:11.364732598Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:11.364866 containerd[1495]: time="2025-02-13T19:34:11.364807799Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:11.364866 containerd[1495]: time="2025-02-13T19:34:11.364816345Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:11.365588 containerd[1495]: time="2025-02-13T19:34:11.365384782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:34:11.365507 systemd[1]: run-netns-cni\x2dec8f1cd0\x2dc896\x2d1128\x2db00b\x2da1fbf897847b.mount: Deactivated successfully. Feb 13 19:34:11.366385 kubelet[2640]: I0213 19:34:11.366360 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead" Feb 13 19:34:11.368403 containerd[1495]: time="2025-02-13T19:34:11.368362974Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:11.368558 containerd[1495]: time="2025-02-13T19:34:11.368507566Z" level=info msg="Ensure that sandbox d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead in task-service has been cleanup successfully" Feb 13 19:34:11.369273 containerd[1495]: time="2025-02-13T19:34:11.368673046Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:11.369273 containerd[1495]: time="2025-02-13T19:34:11.368689026Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns 
successfully" Feb 13 19:34:11.369273 containerd[1495]: time="2025-02-13T19:34:11.369038072Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:11.369273 containerd[1495]: time="2025-02-13T19:34:11.369163326Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:11.369273 containerd[1495]: time="2025-02-13T19:34:11.369177954Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:11.369143 systemd[1]: run-netns-cni\x2d24a98a50\x2db385\x2ddea4\x2d76a7\x2ddc68153fd48a.mount: Deactivated successfully. Feb 13 19:34:11.369517 kubelet[2640]: I0213 19:34:11.369395 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4" Feb 13 19:34:11.369288 systemd[1]: run-netns-cni\x2dcaf25f57\x2d1e14\x2d989c\x2da6b2\x2d85d5c1d125b2.mount: Deactivated successfully. 
Feb 13 19:34:11.370464 kubelet[2640]: E0213 19:34:11.369845 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:11.370539 containerd[1495]: time="2025-02-13T19:34:11.370170066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:2,}" Feb 13 19:34:11.370539 containerd[1495]: time="2025-02-13T19:34:11.370185186Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:11.370539 containerd[1495]: time="2025-02-13T19:34:11.370336910Z" level=info msg="Ensure that sandbox 3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4 in task-service has been cleanup successfully" Feb 13 19:34:11.370645 containerd[1495]: time="2025-02-13T19:34:11.370564126Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:11.370645 containerd[1495]: time="2025-02-13T19:34:11.370595956Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:11.371393 containerd[1495]: time="2025-02-13T19:34:11.371362866Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:11.371563 containerd[1495]: time="2025-02-13T19:34:11.371468575Z" level=info msg="TearDown network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:11.371563 containerd[1495]: time="2025-02-13T19:34:11.371487270Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:11.372221 containerd[1495]: time="2025-02-13T19:34:11.372028266Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:2,}" Feb 13 19:34:11.372620 kubelet[2640]: I0213 19:34:11.372596 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391" Feb 13 19:34:11.373130 containerd[1495]: time="2025-02-13T19:34:11.373064482Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:11.373377 containerd[1495]: time="2025-02-13T19:34:11.373344817Z" level=info msg="Ensure that sandbox 8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391 in task-service has been cleanup successfully" Feb 13 19:34:11.373582 containerd[1495]: time="2025-02-13T19:34:11.373560593Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:11.373606 containerd[1495]: time="2025-02-13T19:34:11.373582123Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:11.373890 systemd[1]: run-netns-cni\x2d6d9344c3\x2d53ba\x2d613e\x2d0195\x2dc0c8034151c4.mount: Deactivated successfully. Feb 13 19:34:11.374014 systemd[1]: run-netns-cni\x2d6e0cfc4e\x2d6e9c\x2d3a6c\x2d9596\x2df607d61cff71.mount: Deactivated successfully. 
Feb 13 19:34:11.374078 containerd[1495]: time="2025-02-13T19:34:11.374017691Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:11.374274 containerd[1495]: time="2025-02-13T19:34:11.374240309Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:11.374274 containerd[1495]: time="2025-02-13T19:34:11.374265657Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:11.374781 kubelet[2640]: E0213 19:34:11.374655 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:11.375024 containerd[1495]: time="2025-02-13T19:34:11.374993894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:2,}" Feb 13 19:34:11.377062 systemd[1]: run-netns-cni\x2dfa61c49d\x2db534\x2d98bc\x2d8e39\x2d56f9fcbe1a0c.mount: Deactivated successfully. 
Feb 13 19:34:11.835074 containerd[1495]: time="2025-02-13T19:34:11.835018134Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:11.835516 containerd[1495]: time="2025-02-13T19:34:11.835175729Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:11.835516 containerd[1495]: time="2025-02-13T19:34:11.835208410Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:11.835953 containerd[1495]: time="2025-02-13T19:34:11.835908204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:2,}" Feb 13 19:34:12.106060 containerd[1495]: time="2025-02-13T19:34:12.105917065Z" level=error msg="Failed to destroy network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.108070 containerd[1495]: time="2025-02-13T19:34:12.107060872Z" level=error msg="encountered an error cleaning up failed sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.108070 containerd[1495]: time="2025-02-13T19:34:12.107140372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.108262 kubelet[2640]: E0213 19:34:12.107461 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.109200 kubelet[2640]: E0213 19:34:12.109157 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:12.109316 kubelet[2640]: E0213 19:34:12.109272 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:12.116529 kubelet[2640]: E0213 19:34:12.116430 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:12.121529 containerd[1495]: time="2025-02-13T19:34:12.121468120Z" level=error msg="Failed to destroy network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.126593 containerd[1495]: time="2025-02-13T19:34:12.126505197Z" level=error msg="encountered an error cleaning up failed sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.126737 containerd[1495]: time="2025-02-13T19:34:12.126676699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.127327 kubelet[2640]: E0213 19:34:12.127068 2640 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.127327 kubelet[2640]: E0213 19:34:12.127164 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:12.127327 kubelet[2640]: E0213 19:34:12.127202 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:12.127447 kubelet[2640]: E0213 19:34:12.127306 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:12.135951 containerd[1495]: time="2025-02-13T19:34:12.135898502Z" level=error msg="Failed to destroy network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.136561 containerd[1495]: time="2025-02-13T19:34:12.136537241Z" level=error msg="encountered an error cleaning up failed sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.136627 containerd[1495]: time="2025-02-13T19:34:12.136607744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.136967 kubelet[2640]: E0213 19:34:12.136915 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:34:12.137033 kubelet[2640]: E0213 19:34:12.136988 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:12.137033 kubelet[2640]: E0213 19:34:12.137015 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:12.137092 kubelet[2640]: E0213 19:34:12.137060 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:12.141392 containerd[1495]: time="2025-02-13T19:34:12.140863924Z" level=error msg="Failed to destroy network for sandbox 
\"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.141528 containerd[1495]: time="2025-02-13T19:34:12.141466325Z" level=error msg="encountered an error cleaning up failed sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.141577 containerd[1495]: time="2025-02-13T19:34:12.141548449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.141865 kubelet[2640]: E0213 19:34:12.141826 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.141922 kubelet[2640]: E0213 19:34:12.141893 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:12.141946 kubelet[2640]: E0213 19:34:12.141923 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:12.142002 kubelet[2640]: E0213 19:34:12.141966 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:12.147959 containerd[1495]: time="2025-02-13T19:34:12.147906577Z" level=error msg="Failed to destroy network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.148600 containerd[1495]: time="2025-02-13T19:34:12.148575222Z" level=error msg="encountered an error cleaning up failed 
sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.148660 containerd[1495]: time="2025-02-13T19:34:12.148637308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.148893 kubelet[2640]: E0213 19:34:12.148849 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.148958 kubelet[2640]: E0213 19:34:12.148917 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:12.148958 kubelet[2640]: E0213 19:34:12.148943 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:12.149018 kubelet[2640]: E0213 19:34:12.148991 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:12.149287 containerd[1495]: time="2025-02-13T19:34:12.149137277Z" level=error msg="Failed to destroy network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.149648 containerd[1495]: time="2025-02-13T19:34:12.149624251Z" level=error msg="encountered an error cleaning up failed sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.149738 containerd[1495]: time="2025-02-13T19:34:12.149695715Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.149917 kubelet[2640]: E0213 19:34:12.149848 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.149917 kubelet[2640]: E0213 19:34:12.149878 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:12.149917 kubelet[2640]: E0213 19:34:12.149897 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:12.150024 kubelet[2640]: E0213 
19:34:12.149929 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podUID="b91e83e8-b503-4a59-bbef-bf279f88f9d9" Feb 13 19:34:12.413689 kubelet[2640]: I0213 19:34:12.413445 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657" Feb 13 19:34:12.418419 kubelet[2640]: I0213 19:34:12.417913 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947" Feb 13 19:34:12.419718 kubelet[2640]: I0213 19:34:12.419689 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de" Feb 13 19:34:12.421404 kubelet[2640]: I0213 19:34:12.421379 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a" Feb 13 19:34:12.445802 containerd[1495]: time="2025-02-13T19:34:12.445743319Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:12.446267 containerd[1495]: time="2025-02-13T19:34:12.446238388Z" level=info msg="Ensure that sandbox 
2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947 in task-service has been cleanup successfully" Feb 13 19:34:12.446448 containerd[1495]: time="2025-02-13T19:34:12.446429418Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:12.446475 containerd[1495]: time="2025-02-13T19:34:12.446446570Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:12.446523 containerd[1495]: time="2025-02-13T19:34:12.446502635Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:12.446619 containerd[1495]: time="2025-02-13T19:34:12.446595910Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:12.446788 containerd[1495]: time="2025-02-13T19:34:12.446767181Z" level=info msg="Ensure that sandbox fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de in task-service has been cleanup successfully" Feb 13 19:34:12.446847 containerd[1495]: time="2025-02-13T19:34:12.446802898Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:12.447145 containerd[1495]: time="2025-02-13T19:34:12.446970052Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:12.447145 containerd[1495]: time="2025-02-13T19:34:12.447138098Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:12.447222 containerd[1495]: time="2025-02-13T19:34:12.447160139Z" level=info msg="Ensure that sandbox cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657 in task-service has been cleanup successfully" Feb 13 19:34:12.447406 containerd[1495]: 
time="2025-02-13T19:34:12.447061173Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:12.447469 containerd[1495]: time="2025-02-13T19:34:12.447453931Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:12.447469 containerd[1495]: time="2025-02-13T19:34:12.447467476Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:12.447534 containerd[1495]: time="2025-02-13T19:34:12.446778422Z" level=info msg="Ensure that sandbox 0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a in task-service has been cleanup successfully" Feb 13 19:34:12.447620 containerd[1495]: time="2025-02-13T19:34:12.447587351Z" level=info msg="TearDown network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" successfully" Feb 13 19:34:12.447620 containerd[1495]: time="2025-02-13T19:34:12.447600526Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" returns successfully" Feb 13 19:34:12.447827 containerd[1495]: time="2025-02-13T19:34:12.447809287Z" level=info msg="TearDown network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" successfully" Feb 13 19:34:12.447827 containerd[1495]: time="2025-02-13T19:34:12.447823644Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:12.448161 kubelet[2640]: I0213 19:34:12.448125 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00" Feb 13 19:34:12.448522 containerd[1495]: time="2025-02-13T19:34:12.448498853Z" level=info msg="StopPodSandbox for 
\"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:12.448593 containerd[1495]: time="2025-02-13T19:34:12.448571970Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:12.448593 containerd[1495]: time="2025-02-13T19:34:12.448581778Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:12.448711 containerd[1495]: time="2025-02-13T19:34:12.448691444Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.448816188Z" level=info msg="Ensure that sandbox b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00 in task-service has been cleanup successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.448865610Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.448959537Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.448977401Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.448989083Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.449018278Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.449026112Z" level=info msg="StopPodSandbox for 
\"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.448991457Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.449085894Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.449107254Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.449122804Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:12.449161 containerd[1495]: time="2025-02-13T19:34:12.449143913Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449235455Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449314353Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449371991Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449379867Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449412448Z" level=info 
msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449466700Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:12.449559 containerd[1495]: time="2025-02-13T19:34:12.449474384Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:12.449750 kubelet[2640]: E0213 19:34:12.449324 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:12.449750 kubelet[2640]: E0213 19:34:12.449618 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:12.449818 containerd[1495]: time="2025-02-13T19:34:12.449586945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:3,}" Feb 13 19:34:12.449970 containerd[1495]: time="2025-02-13T19:34:12.449945999Z" level=info msg="TearDown network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:12.450013 containerd[1495]: time="2025-02-13T19:34:12.449994601Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:12.450044 containerd[1495]: time="2025-02-13T19:34:12.450000181Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:12.450073 containerd[1495]: time="2025-02-13T19:34:12.450044093Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" 
Feb 13 19:34:12.450137 containerd[1495]: time="2025-02-13T19:34:12.450118182Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:12.450137 containerd[1495]: time="2025-02-13T19:34:12.450131968Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:12.450291 containerd[1495]: time="2025-02-13T19:34:12.449973331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:3,}" Feb 13 19:34:12.450514 containerd[1495]: time="2025-02-13T19:34:12.450134503Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:12.450514 containerd[1495]: time="2025-02-13T19:34:12.450396335Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:12.450678 containerd[1495]: time="2025-02-13T19:34:12.450650592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:3,}" Feb 13 19:34:12.451466 containerd[1495]: time="2025-02-13T19:34:12.451382947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:34:12.451466 containerd[1495]: time="2025-02-13T19:34:12.451417522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:3,}" Feb 13 19:34:12.452262 kubelet[2640]: I0213 19:34:12.452186 2640 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99" Feb 13 19:34:12.453122 containerd[1495]: time="2025-02-13T19:34:12.452943517Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:12.453122 containerd[1495]: time="2025-02-13T19:34:12.453102265Z" level=info msg="Ensure that sandbox 6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99 in task-service has been cleanup successfully" Feb 13 19:34:12.453625 containerd[1495]: time="2025-02-13T19:34:12.453589640Z" level=info msg="TearDown network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:12.453625 containerd[1495]: time="2025-02-13T19:34:12.453609988Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully" Feb 13 19:34:12.453920 containerd[1495]: time="2025-02-13T19:34:12.453877621Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:12.454137 containerd[1495]: time="2025-02-13T19:34:12.454055795Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:12.454137 containerd[1495]: time="2025-02-13T19:34:12.454079469Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:12.454444 containerd[1495]: time="2025-02-13T19:34:12.454417324Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:12.454540 containerd[1495]: time="2025-02-13T19:34:12.454515768Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:12.454579 containerd[1495]: time="2025-02-13T19:34:12.454536918Z" level=info msg="StopPodSandbox 
for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:12.460120 containerd[1495]: time="2025-02-13T19:34:12.459258883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:34:12.697330 containerd[1495]: time="2025-02-13T19:34:12.697098607Z" level=error msg="Failed to destroy network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.698930 containerd[1495]: time="2025-02-13T19:34:12.697612271Z" level=error msg="encountered an error cleaning up failed sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.698930 containerd[1495]: time="2025-02-13T19:34:12.697676061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.699225 kubelet[2640]: E0213 19:34:12.697962 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.699225 kubelet[2640]: E0213 19:34:12.698152 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:12.699225 kubelet[2640]: E0213 19:34:12.698181 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:12.699412 kubelet[2640]: E0213 19:34:12.698837 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podUID="b91e83e8-b503-4a59-bbef-bf279f88f9d9" Feb 13 19:34:12.701765 containerd[1495]: time="2025-02-13T19:34:12.700721308Z" level=error msg="Failed to destroy network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.701765 containerd[1495]: time="2025-02-13T19:34:12.701327977Z" level=error msg="encountered an error cleaning up failed sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.717477 containerd[1495]: time="2025-02-13T19:34:12.717419638Z" level=error msg="Failed to destroy network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.717984 containerd[1495]: time="2025-02-13T19:34:12.717787738Z" level=error msg="Failed to destroy network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.720458 containerd[1495]: time="2025-02-13T19:34:12.720414811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:3,} failed, error" 
error="failed to setup network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.720635 containerd[1495]: time="2025-02-13T19:34:12.720610899Z" level=error msg="Failed to destroy network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.723900 kubelet[2640]: E0213 19:34:12.723841 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.723974 kubelet[2640]: E0213 19:34:12.723925 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:12.723974 kubelet[2640]: E0213 19:34:12.723953 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:12.724049 kubelet[2640]: E0213 19:34:12.724005 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:12.736403 containerd[1495]: time="2025-02-13T19:34:12.736328187Z" level=error msg="encountered an error cleaning up failed sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737386 containerd[1495]: time="2025-02-13T19:34:12.736457209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737386 containerd[1495]: time="2025-02-13T19:34:12.736356009Z" level=error 
msg="encountered an error cleaning up failed sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737386 containerd[1495]: time="2025-02-13T19:34:12.736596921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737386 containerd[1495]: time="2025-02-13T19:34:12.736900401Z" level=error msg="encountered an error cleaning up failed sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737386 containerd[1495]: time="2025-02-13T19:34:12.736963460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737768 kubelet[2640]: E0213 19:34:12.736809 2640 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737768 kubelet[2640]: E0213 19:34:12.736893 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:12.737768 kubelet[2640]: E0213 19:34:12.736919 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:12.737974 kubelet[2640]: E0213 19:34:12.736970 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:12.737974 kubelet[2640]: E0213 19:34:12.737066 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.737974 kubelet[2640]: E0213 19:34:12.737306 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:12.738125 kubelet[2640]: E0213 19:34:12.737334 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:12.738125 kubelet[2640]: E0213 19:34:12.737383 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:12.738331 kubelet[2640]: E0213 19:34:12.738296 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.738596 kubelet[2640]: E0213 19:34:12.738386 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:12.738596 kubelet[2640]: E0213 19:34:12.738404 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:12.738596 kubelet[2640]: E0213 19:34:12.738439 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:12.739105 containerd[1495]: time="2025-02-13T19:34:12.738931985Z" level=error msg="Failed to destroy network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.740183 containerd[1495]: time="2025-02-13T19:34:12.740011061Z" level=error msg="encountered an error cleaning up failed sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.740183 containerd[1495]: time="2025-02-13T19:34:12.740060013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.740636 kubelet[2640]: E0213 19:34:12.740284 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:12.740636 kubelet[2640]: E0213 19:34:12.740334 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:12.740636 kubelet[2640]: E0213 19:34:12.740358 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:12.740771 kubelet[2640]: E0213 19:34:12.740398 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:12.918843 systemd[1]: run-netns-cni\x2de0fea54d\x2dfa15\x2dc048\x2da09e\x2d470f68256788.mount: Deactivated successfully. Feb 13 19:34:12.919617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657-shm.mount: Deactivated successfully. Feb 13 19:34:12.919714 systemd[1]: run-netns-cni\x2d75d6c3ea\x2dbdfd\x2d2bf2\x2ded41\x2d3cd0ce7af583.mount: Deactivated successfully. Feb 13 19:34:12.919789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99-shm.mount: Deactivated successfully. Feb 13 19:34:12.919873 systemd[1]: run-netns-cni\x2d749cd4fb\x2dc0ca\x2d297d\x2d9823\x2dea453fb05a25.mount: Deactivated successfully. Feb 13 19:34:12.919949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de-shm.mount: Deactivated successfully. Feb 13 19:34:12.920032 systemd[1]: run-netns-cni\x2dfbfe93ab\x2d6774\x2d92ef\x2d25d5\x2dac2f6bd2fd3e.mount: Deactivated successfully. Feb 13 19:34:12.920119 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947-shm.mount: Deactivated successfully. Feb 13 19:34:12.920218 systemd[1]: run-netns-cni\x2de26a98f3\x2d7a31\x2dc359\x2d8b2c\x2d627b9cec3977.mount: Deactivated successfully. Feb 13 19:34:12.920294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00-shm.mount: Deactivated successfully. 
Feb 13 19:34:13.457418 kubelet[2640]: I0213 19:34:13.457364 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b" Feb 13 19:34:13.458235 containerd[1495]: time="2025-02-13T19:34:13.458183880Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:13.458682 containerd[1495]: time="2025-02-13T19:34:13.458422016Z" level=info msg="Ensure that sandbox f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b in task-service has been cleanup successfully" Feb 13 19:34:13.459941 containerd[1495]: time="2025-02-13T19:34:13.459594808Z" level=info msg="TearDown network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" successfully" Feb 13 19:34:13.459941 containerd[1495]: time="2025-02-13T19:34:13.459612781Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" returns successfully" Feb 13 19:34:13.460047 kubelet[2640]: I0213 19:34:13.459692 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d" Feb 13 19:34:13.460265 containerd[1495]: time="2025-02-13T19:34:13.460232686Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:13.460359 containerd[1495]: time="2025-02-13T19:34:13.460342281Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:13.460403 containerd[1495]: time="2025-02-13T19:34:13.460357931Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:13.460525 containerd[1495]: time="2025-02-13T19:34:13.460482484Z" level=info msg="StopPodSandbox for 
\"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" Feb 13 19:34:13.460696 containerd[1495]: time="2025-02-13T19:34:13.460675207Z" level=info msg="Ensure that sandbox 8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d in task-service has been cleanup successfully" Feb 13 19:34:13.461116 containerd[1495]: time="2025-02-13T19:34:13.461097138Z" level=info msg="TearDown network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" successfully" Feb 13 19:34:13.461116 containerd[1495]: time="2025-02-13T19:34:13.461111515Z" level=info msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" returns successfully" Feb 13 19:34:13.462728 containerd[1495]: time="2025-02-13T19:34:13.462497637Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:13.462728 containerd[1495]: time="2025-02-13T19:34:13.462536781Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:13.462728 containerd[1495]: time="2025-02-13T19:34:13.462579932Z" level=info msg="TearDown network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" successfully" Feb 13 19:34:13.462728 containerd[1495]: time="2025-02-13T19:34:13.462590351Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" returns successfully" Feb 13 19:34:13.462728 containerd[1495]: time="2025-02-13T19:34:13.462617502Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:13.462728 containerd[1495]: time="2025-02-13T19:34:13.462627381Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns successfully" Feb 13 19:34:13.463040 containerd[1495]: time="2025-02-13T19:34:13.462989821Z" level=info 
msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:13.463097 containerd[1495]: time="2025-02-13T19:34:13.463069721Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:13.463097 containerd[1495]: time="2025-02-13T19:34:13.463089518Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:13.463226 containerd[1495]: time="2025-02-13T19:34:13.463168056Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:13.463280 containerd[1495]: time="2025-02-13T19:34:13.463248326Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:13.463280 containerd[1495]: time="2025-02-13T19:34:13.463257333Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:13.463581 kubelet[2640]: E0213 19:34:13.463513 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:13.463928 containerd[1495]: time="2025-02-13T19:34:13.463877097Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:13.463975 containerd[1495]: time="2025-02-13T19:34:13.463947620Z" level=info msg="TearDown network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:13.463975 containerd[1495]: time="2025-02-13T19:34:13.463957578Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:13.464039 containerd[1495]: 
time="2025-02-13T19:34:13.464020576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:4,}" Feb 13 19:34:13.464044 systemd[1]: run-netns-cni\x2d44f2b4ee\x2d34cb\x2d359f\x2d19d5\x2d2fe4a727d86a.mount: Deactivated successfully. Feb 13 19:34:13.464211 systemd[1]: run-netns-cni\x2dd3f5bbc6\x2d269a\x2d1379\x2dafb8\x2df8961e82a8ff.mount: Deactivated successfully. Feb 13 19:34:13.465879 containerd[1495]: time="2025-02-13T19:34:13.465768327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:4,}" Feb 13 19:34:13.466816 kubelet[2640]: I0213 19:34:13.466755 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97" Feb 13 19:34:13.467361 containerd[1495]: time="2025-02-13T19:34:13.467341090Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:13.467896 containerd[1495]: time="2025-02-13T19:34:13.467820289Z" level=info msg="Ensure that sandbox 3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97 in task-service has been cleanup successfully" Feb 13 19:34:13.468119 containerd[1495]: time="2025-02-13T19:34:13.468059469Z" level=info msg="TearDown network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" successfully" Feb 13 19:34:13.468158 containerd[1495]: time="2025-02-13T19:34:13.468126925Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" returns successfully" Feb 13 19:34:13.468645 containerd[1495]: time="2025-02-13T19:34:13.468593211Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:13.468860 containerd[1495]: 
time="2025-02-13T19:34:13.468833411Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:13.468860 containerd[1495]: time="2025-02-13T19:34:13.468854090Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:13.470582 containerd[1495]: time="2025-02-13T19:34:13.470564662Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:13.470783 containerd[1495]: time="2025-02-13T19:34:13.470768344Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:13.470932 containerd[1495]: time="2025-02-13T19:34:13.470829318Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:13.471214 containerd[1495]: time="2025-02-13T19:34:13.471184986Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:13.471568 containerd[1495]: time="2025-02-13T19:34:13.471552085Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:13.471767 containerd[1495]: time="2025-02-13T19:34:13.471740018Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:13.472167 kubelet[2640]: I0213 19:34:13.471873 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863" Feb 13 19:34:13.472167 kubelet[2640]: E0213 19:34:13.472037 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 13 19:34:13.472453 containerd[1495]: time="2025-02-13T19:34:13.472432047Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:13.472667 containerd[1495]: time="2025-02-13T19:34:13.472647672Z" level=info msg="Ensure that sandbox 70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863 in task-service has been cleanup successfully" Feb 13 19:34:13.473022 containerd[1495]: time="2025-02-13T19:34:13.472932937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:4,}" Feb 13 19:34:13.473272 systemd[1]: run-netns-cni\x2d61bb3098\x2daad0\x2de0cc\x2d23b2\x2d1d77c6915232.mount: Deactivated successfully. Feb 13 19:34:13.473648 containerd[1495]: time="2025-02-13T19:34:13.473624405Z" level=info msg="TearDown network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" successfully" Feb 13 19:34:13.473648 containerd[1495]: time="2025-02-13T19:34:13.473646036Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" returns successfully" Feb 13 19:34:13.475096 containerd[1495]: time="2025-02-13T19:34:13.475067234Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:13.475563 containerd[1495]: time="2025-02-13T19:34:13.475380372Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:13.475563 containerd[1495]: time="2025-02-13T19:34:13.475396703Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:13.476047 containerd[1495]: time="2025-02-13T19:34:13.475942678Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 
19:34:13.476477 containerd[1495]: time="2025-02-13T19:34:13.476449198Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:13.476477 containerd[1495]: time="2025-02-13T19:34:13.476466230Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:13.477260 containerd[1495]: time="2025-02-13T19:34:13.477239802Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:13.477341 containerd[1495]: time="2025-02-13T19:34:13.477323781Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:13.477341 containerd[1495]: time="2025-02-13T19:34:13.477338238Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:13.477763 containerd[1495]: time="2025-02-13T19:34:13.477742125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:34:13.478943 systemd[1]: run-netns-cni\x2dc102eca1\x2d0448\x2dc776\x2d9256\x2d6983ec83191a.mount: Deactivated successfully. Feb 13 19:34:13.703116 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:35354.service - OpenSSH per-connection server daemon (10.0.0.1:35354). 
Feb 13 19:34:13.879616 kubelet[2640]: I0213 19:34:13.879464 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417" Feb 13 19:34:13.880660 containerd[1495]: time="2025-02-13T19:34:13.880323355Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\"" Feb 13 19:34:13.880660 containerd[1495]: time="2025-02-13T19:34:13.880569899Z" level=info msg="Ensure that sandbox 8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417 in task-service has been cleanup successfully" Feb 13 19:34:13.883644 containerd[1495]: time="2025-02-13T19:34:13.883348384Z" level=info msg="TearDown network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" successfully" Feb 13 19:34:13.883644 containerd[1495]: time="2025-02-13T19:34:13.883377038Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" returns successfully" Feb 13 19:34:13.884442 containerd[1495]: time="2025-02-13T19:34:13.884401732Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:13.884615 containerd[1495]: time="2025-02-13T19:34:13.884528920Z" level=info msg="TearDown network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:13.884615 containerd[1495]: time="2025-02-13T19:34:13.884552484Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully" Feb 13 19:34:13.885213 kubelet[2640]: I0213 19:34:13.885132 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5" Feb 13 19:34:13.885681 containerd[1495]: time="2025-02-13T19:34:13.885601744Z" level=info msg="StopPodSandbox for 
\"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:13.886894 containerd[1495]: time="2025-02-13T19:34:13.885774950Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:13.886894 containerd[1495]: time="2025-02-13T19:34:13.885815386Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:13.886894 containerd[1495]: time="2025-02-13T19:34:13.885826356Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:13.886894 containerd[1495]: time="2025-02-13T19:34:13.886332026Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:13.886894 containerd[1495]: time="2025-02-13T19:34:13.886446861Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:13.886894 containerd[1495]: time="2025-02-13T19:34:13.886474884Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:13.888235 containerd[1495]: time="2025-02-13T19:34:13.888203318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:34:13.889333 containerd[1495]: time="2025-02-13T19:34:13.889301019Z" level=info msg="Ensure that sandbox 810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5 in task-service has been cleanup successfully" Feb 13 19:34:13.889866 containerd[1495]: time="2025-02-13T19:34:13.889814492Z" level=info msg="TearDown network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" successfully" Feb 13 19:34:13.889866 containerd[1495]: 
time="2025-02-13T19:34:13.889855950Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" returns successfully" Feb 13 19:34:13.892492 containerd[1495]: time="2025-02-13T19:34:13.892455601Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:13.893246 containerd[1495]: time="2025-02-13T19:34:13.892674943Z" level=info msg="TearDown network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" successfully" Feb 13 19:34:13.893246 containerd[1495]: time="2025-02-13T19:34:13.892801019Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:13.893246 containerd[1495]: time="2025-02-13T19:34:13.893045859Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:13.893246 containerd[1495]: time="2025-02-13T19:34:13.893126891Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:13.893246 containerd[1495]: time="2025-02-13T19:34:13.893135778Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:13.893470 containerd[1495]: time="2025-02-13T19:34:13.893442423Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:13.893608 containerd[1495]: time="2025-02-13T19:34:13.893558762Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:13.893648 containerd[1495]: time="2025-02-13T19:34:13.893634745Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:13.895491 containerd[1495]: 
time="2025-02-13T19:34:13.895404917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:4,}" Feb 13 19:34:13.916308 systemd[1]: run-netns-cni\x2ddbf17048\x2d7008\x2d0524\x2d3b53\x2d7ec2d6c894de.mount: Deactivated successfully. Feb 13 19:34:13.916454 systemd[1]: run-netns-cni\x2defd54e97\x2de0a0\x2d8beb\x2d47c2\x2d8286f9a5a4a5.mount: Deactivated successfully. Feb 13 19:34:13.948278 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 35354 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:13.949653 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:13.955833 systemd-logind[1480]: New session 11 of user core. Feb 13 19:34:13.960612 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:34:14.107650 containerd[1495]: time="2025-02-13T19:34:14.107580198Z" level=error msg="Failed to destroy network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.109873 containerd[1495]: time="2025-02-13T19:34:14.109613906Z" level=error msg="encountered an error cleaning up failed sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.109873 containerd[1495]: time="2025-02-13T19:34:14.109694908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:4,} failed, 
error" error="failed to setup network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.112392 kubelet[2640]: E0213 19:34:14.110864 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.112392 kubelet[2640]: E0213 19:34:14.111733 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:14.112392 kubelet[2640]: E0213 19:34:14.111767 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:14.114531 kubelet[2640]: E0213 19:34:14.112480 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podUID="b91e83e8-b503-4a59-bbef-bf279f88f9d9" Feb 13 19:34:14.130181 containerd[1495]: time="2025-02-13T19:34:14.130007007Z" level=error msg="Failed to destroy network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140309 containerd[1495]: time="2025-02-13T19:34:14.131544954Z" level=error msg="encountered an error cleaning up failed sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140309 containerd[1495]: time="2025-02-13T19:34:14.131603514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140309 containerd[1495]: time="2025-02-13T19:34:14.135082865Z" level=error msg="Failed to destroy network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140309 containerd[1495]: time="2025-02-13T19:34:14.136381823Z" level=error msg="encountered an error cleaning up failed sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140309 containerd[1495]: time="2025-02-13T19:34:14.136594153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140768 kubelet[2640]: E0213 19:34:14.132640 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140768 kubelet[2640]: E0213 19:34:14.132737 2640 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:14.140768 kubelet[2640]: E0213 19:34:14.132777 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:14.140979 kubelet[2640]: E0213 19:34:14.132873 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:14.140979 kubelet[2640]: E0213 19:34:14.136925 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.140979 kubelet[2640]: E0213 19:34:14.136997 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:14.141177 kubelet[2640]: E0213 19:34:14.137019 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:14.141177 kubelet[2640]: E0213 19:34:14.137072 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" 
podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:14.149501 containerd[1495]: time="2025-02-13T19:34:14.149239068Z" level=error msg="Failed to destroy network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.149893 containerd[1495]: time="2025-02-13T19:34:14.149769073Z" level=error msg="encountered an error cleaning up failed sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.149893 containerd[1495]: time="2025-02-13T19:34:14.149832612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.150417 kubelet[2640]: E0213 19:34:14.150152 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.150417 kubelet[2640]: E0213 19:34:14.150273 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:14.150417 kubelet[2640]: E0213 19:34:14.150296 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:14.150620 kubelet[2640]: E0213 19:34:14.150351 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:14.171214 containerd[1495]: time="2025-02-13T19:34:14.170793670Z" level=error msg="Failed to destroy network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 19:34:14.173090 containerd[1495]: time="2025-02-13T19:34:14.172970246Z" level=error msg="encountered an error cleaning up failed sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.173881 containerd[1495]: time="2025-02-13T19:34:14.173391637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.174511 kubelet[2640]: E0213 19:34:14.174242 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.174511 kubelet[2640]: E0213 19:34:14.174326 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 
19:34:14.174511 kubelet[2640]: E0213 19:34:14.174350 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:14.174685 kubelet[2640]: E0213 19:34:14.174400 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:14.180808 containerd[1495]: time="2025-02-13T19:34:14.179649113Z" level=error msg="Failed to destroy network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.181762 containerd[1495]: time="2025-02-13T19:34:14.181156172Z" level=error msg="encountered an error cleaning up failed sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.181762 containerd[1495]: time="2025-02-13T19:34:14.181265096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.181835 kubelet[2640]: E0213 19:34:14.181552 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:14.181835 kubelet[2640]: E0213 19:34:14.181617 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:14.181835 kubelet[2640]: E0213 19:34:14.181639 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:14.181966 kubelet[2640]: E0213 19:34:14.181684 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:14.191248 sshd[4410]: Connection closed by 10.0.0.1 port 35354 Feb 13 19:34:14.191657 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:14.196288 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:35354.service: Deactivated successfully. Feb 13 19:34:14.200002 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:34:14.203568 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:34:14.205728 systemd-logind[1480]: Removed session 11. 
Feb 13 19:34:14.902020 kubelet[2640]: I0213 19:34:14.901980 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219" Feb 13 19:34:14.903982 containerd[1495]: time="2025-02-13T19:34:14.903258888Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" Feb 13 19:34:14.903982 containerd[1495]: time="2025-02-13T19:34:14.903487376Z" level=info msg="Ensure that sandbox 9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219 in task-service has been cleanup successfully" Feb 13 19:34:14.903982 containerd[1495]: time="2025-02-13T19:34:14.903686360Z" level=info msg="TearDown network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" successfully" Feb 13 19:34:14.903982 containerd[1495]: time="2025-02-13T19:34:14.903699675Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" returns successfully" Feb 13 19:34:14.904867 containerd[1495]: time="2025-02-13T19:34:14.904564579Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:14.904867 containerd[1495]: time="2025-02-13T19:34:14.904636323Z" level=info msg="TearDown network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" successfully" Feb 13 19:34:14.904867 containerd[1495]: time="2025-02-13T19:34:14.904645150Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" returns successfully" Feb 13 19:34:14.905145 containerd[1495]: time="2025-02-13T19:34:14.905110724Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:14.905234 containerd[1495]: time="2025-02-13T19:34:14.905219348Z" level=info msg="TearDown network for sandbox 
\"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" successfully" Feb 13 19:34:14.905234 containerd[1495]: time="2025-02-13T19:34:14.905231851Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:14.905536 containerd[1495]: time="2025-02-13T19:34:14.905516736Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:14.905599 containerd[1495]: time="2025-02-13T19:34:14.905585836Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:14.905599 containerd[1495]: time="2025-02-13T19:34:14.905596726Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:14.905972 containerd[1495]: time="2025-02-13T19:34:14.905895907Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:14.905972 containerd[1495]: time="2025-02-13T19:34:14.905966580Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:14.906068 containerd[1495]: time="2025-02-13T19:34:14.905975316Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:14.906309 kubelet[2640]: I0213 19:34:14.906282 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae" Feb 13 19:34:14.906862 containerd[1495]: time="2025-02-13T19:34:14.906753628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:5,}" Feb 13 19:34:14.907107 containerd[1495]: 
time="2025-02-13T19:34:14.907031709Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" Feb 13 19:34:14.907295 containerd[1495]: time="2025-02-13T19:34:14.907244590Z" level=info msg="Ensure that sandbox 1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae in task-service has been cleanup successfully" Feb 13 19:34:14.907657 containerd[1495]: time="2025-02-13T19:34:14.907635312Z" level=info msg="TearDown network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" successfully" Feb 13 19:34:14.907702 containerd[1495]: time="2025-02-13T19:34:14.907655039Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" returns successfully" Feb 13 19:34:14.908256 containerd[1495]: time="2025-02-13T19:34:14.908119762Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:14.908722 containerd[1495]: time="2025-02-13T19:34:14.908659916Z" level=info msg="TearDown network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" successfully" Feb 13 19:34:14.908766 containerd[1495]: time="2025-02-13T19:34:14.908720590Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" returns successfully" Feb 13 19:34:14.909416 containerd[1495]: time="2025-02-13T19:34:14.909286673Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:14.909416 containerd[1495]: time="2025-02-13T19:34:14.909362726Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:14.909416 containerd[1495]: time="2025-02-13T19:34:14.909371532Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:14.910167 
containerd[1495]: time="2025-02-13T19:34:14.910139143Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:14.910285 containerd[1495]: time="2025-02-13T19:34:14.910251664Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:14.910285 containerd[1495]: time="2025-02-13T19:34:14.910263987Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:14.910656 containerd[1495]: time="2025-02-13T19:34:14.910632830Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:14.910858 containerd[1495]: time="2025-02-13T19:34:14.910718651Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:14.910858 containerd[1495]: time="2025-02-13T19:34:14.910729992Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:14.910946 kubelet[2640]: I0213 19:34:14.910828 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a" Feb 13 19:34:14.911111 containerd[1495]: time="2025-02-13T19:34:14.911076583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:34:14.911531 containerd[1495]: time="2025-02-13T19:34:14.911351689Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\"" Feb 13 19:34:14.911531 containerd[1495]: time="2025-02-13T19:34:14.911554691Z" level=info msg="Ensure that sandbox 
23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a in task-service has been cleanup successfully" Feb 13 19:34:14.912396 containerd[1495]: time="2025-02-13T19:34:14.912170506Z" level=info msg="TearDown network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" successfully" Feb 13 19:34:14.912396 containerd[1495]: time="2025-02-13T19:34:14.912206454Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" returns successfully" Feb 13 19:34:14.912603 containerd[1495]: time="2025-02-13T19:34:14.912579574Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\"" Feb 13 19:34:14.912690 containerd[1495]: time="2025-02-13T19:34:14.912672809Z" level=info msg="TearDown network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" successfully" Feb 13 19:34:14.912829 containerd[1495]: time="2025-02-13T19:34:14.912688769Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" returns successfully" Feb 13 19:34:14.913330 containerd[1495]: time="2025-02-13T19:34:14.913307711Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:14.913588 containerd[1495]: time="2025-02-13T19:34:14.913403861Z" level=info msg="TearDown network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:14.913588 containerd[1495]: time="2025-02-13T19:34:14.913415593Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully" Feb 13 19:34:14.915993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec-shm.mount: Deactivated successfully. 
Feb 13 19:34:14.916161 kubelet[2640]: I0213 19:34:14.916134 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a" Feb 13 19:34:14.916674 containerd[1495]: time="2025-02-13T19:34:14.916651217Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" Feb 13 19:34:14.916724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9-shm.mount: Deactivated successfully. Feb 13 19:34:14.916806 systemd[1]: run-netns-cni\x2d22eb606e\x2d384f\x2d8295\x2dd0df\x2d3b7f91374256.mount: Deactivated successfully. Feb 13 19:34:14.916868 containerd[1495]: time="2025-02-13T19:34:14.916831365Z" level=info msg="Ensure that sandbox 5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a in task-service has been cleanup successfully" Feb 13 19:34:14.916883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae-shm.mount: Deactivated successfully. Feb 13 19:34:14.916981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a-shm.mount: Deactivated successfully. 
Feb 13 19:34:14.917163 containerd[1495]: time="2025-02-13T19:34:14.917136458Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:14.917397 containerd[1495]: time="2025-02-13T19:34:14.917245052Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:14.917397 containerd[1495]: time="2025-02-13T19:34:14.917290367Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:14.917951 containerd[1495]: time="2025-02-13T19:34:14.917858994Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:14.918252 containerd[1495]: time="2025-02-13T19:34:14.918231594Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:14.918442 containerd[1495]: time="2025-02-13T19:34:14.918250920Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:14.918442 containerd[1495]: time="2025-02-13T19:34:14.918420528Z" level=info msg="TearDown network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" successfully" Feb 13 19:34:14.918442 containerd[1495]: time="2025-02-13T19:34:14.918434905Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" returns successfully" Feb 13 19:34:14.919499 containerd[1495]: time="2025-02-13T19:34:14.919335296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:34:14.919861 containerd[1495]: time="2025-02-13T19:34:14.919831347Z" level=info msg="StopPodSandbox for 
\"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:14.920137 containerd[1495]: time="2025-02-13T19:34:14.919934721Z" level=info msg="TearDown network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" successfully" Feb 13 19:34:14.920137 containerd[1495]: time="2025-02-13T19:34:14.919952905Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" returns successfully" Feb 13 19:34:14.920378 containerd[1495]: time="2025-02-13T19:34:14.920325524Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:14.920451 containerd[1495]: time="2025-02-13T19:34:14.920414782Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:14.920451 containerd[1495]: time="2025-02-13T19:34:14.920433327Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:14.920945 containerd[1495]: time="2025-02-13T19:34:14.920918618Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:14.921125 containerd[1495]: time="2025-02-13T19:34:14.921016732Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:14.921125 containerd[1495]: time="2025-02-13T19:34:14.921074130Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns successfully" Feb 13 19:34:14.921879 containerd[1495]: time="2025-02-13T19:34:14.921796075Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:14.921940 containerd[1495]: time="2025-02-13T19:34:14.921895982Z" level=info msg="TearDown network for sandbox 
\"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:14.921940 containerd[1495]: time="2025-02-13T19:34:14.921907804Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:14.922006 kubelet[2640]: I0213 19:34:14.921955 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9" Feb 13 19:34:14.922218 kubelet[2640]: E0213 19:34:14.922176 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:14.922568 containerd[1495]: time="2025-02-13T19:34:14.922361306Z" level=info msg="StopPodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\"" Feb 13 19:34:14.922568 containerd[1495]: time="2025-02-13T19:34:14.922553376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:5,}" Feb 13 19:34:14.922720 containerd[1495]: time="2025-02-13T19:34:14.922552534Z" level=info msg="Ensure that sandbox 3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9 in task-service has been cleanup successfully" Feb 13 19:34:14.922808 systemd[1]: run-netns-cni\x2d37ea3295\x2d6d9d\x2d25c7\x2d0773\x2d8fb431376fe8.mount: Deactivated successfully. 
Feb 13 19:34:14.923123 containerd[1495]: time="2025-02-13T19:34:14.923101064Z" level=info msg="TearDown network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" successfully" Feb 13 19:34:14.923123 containerd[1495]: time="2025-02-13T19:34:14.923120040Z" level=info msg="StopPodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" returns successfully" Feb 13 19:34:14.924796 containerd[1495]: time="2025-02-13T19:34:14.924771210Z" level=info msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" Feb 13 19:34:14.924884 containerd[1495]: time="2025-02-13T19:34:14.924864265Z" level=info msg="TearDown network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" successfully" Feb 13 19:34:14.924937 containerd[1495]: time="2025-02-13T19:34:14.924881848Z" level=info msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" returns successfully" Feb 13 19:34:14.925133 containerd[1495]: time="2025-02-13T19:34:14.925097131Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:14.925452 containerd[1495]: time="2025-02-13T19:34:14.925432301Z" level=info msg="TearDown network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" successfully" Feb 13 19:34:14.925500 containerd[1495]: time="2025-02-13T19:34:14.925450134Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" returns successfully" Feb 13 19:34:14.926866 systemd[1]: run-netns-cni\x2da6e13ea5\x2d2464\x2d5ef1\x2d4004\x2dfc91c084c14f.mount: Deactivated successfully. 
Feb 13 19:34:14.927638 containerd[1495]: time="2025-02-13T19:34:14.927599680Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:14.927758 kubelet[2640]: I0213 19:34:14.927735 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec" Feb 13 19:34:14.927832 containerd[1495]: time="2025-02-13T19:34:14.927739953Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:14.927832 containerd[1495]: time="2025-02-13T19:34:14.927753989Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:14.928224 containerd[1495]: time="2025-02-13T19:34:14.928078297Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" Feb 13 19:34:14.930067 containerd[1495]: time="2025-02-13T19:34:14.928293181Z" level=info msg="Ensure that sandbox b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec in task-service has been cleanup successfully" Feb 13 19:34:14.930067 containerd[1495]: time="2025-02-13T19:34:14.928381226Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:14.930067 containerd[1495]: time="2025-02-13T19:34:14.928446659Z" level=info msg="TearDown network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" successfully" Feb 13 19:34:14.930067 containerd[1495]: time="2025-02-13T19:34:14.928456778Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" returns successfully" Feb 13 19:34:14.930067 containerd[1495]: time="2025-02-13T19:34:14.928470894Z" level=info msg="TearDown network for sandbox 
\"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:14.930067 containerd[1495]: time="2025-02-13T19:34:14.928483127Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:14.931177 containerd[1495]: time="2025-02-13T19:34:14.931144243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:5,}" Feb 13 19:34:14.931486 containerd[1495]: time="2025-02-13T19:34:14.931332807Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:14.931486 containerd[1495]: time="2025-02-13T19:34:14.931423498Z" level=info msg="TearDown network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" successfully" Feb 13 19:34:14.931486 containerd[1495]: time="2025-02-13T19:34:14.931436702Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" returns successfully" Feb 13 19:34:14.932006 containerd[1495]: time="2025-02-13T19:34:14.931986314Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:14.932110 systemd[1]: run-netns-cni\x2ddae27771\x2d72a6\x2d7734\x2d68e5\x2d13404643ad7e.mount: Deactivated successfully. 
Feb 13 19:34:14.932849 containerd[1495]: time="2025-02-13T19:34:14.932154770Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:14.932849 containerd[1495]: time="2025-02-13T19:34:14.932173044Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:14.932849 containerd[1495]: time="2025-02-13T19:34:14.932482214Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:14.932849 containerd[1495]: time="2025-02-13T19:34:14.932551295Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:14.932849 containerd[1495]: time="2025-02-13T19:34:14.932560171Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:14.932849 containerd[1495]: time="2025-02-13T19:34:14.932771468Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:14.933004 containerd[1495]: time="2025-02-13T19:34:14.932854333Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:14.933004 containerd[1495]: time="2025-02-13T19:34:14.932864702Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:14.933060 kubelet[2640]: E0213 19:34:14.933033 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:14.933311 containerd[1495]: time="2025-02-13T19:34:14.933289280Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:5,}" Feb 13 19:34:14.992316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46434960.mount: Deactivated successfully. Feb 13 19:34:15.687176 containerd[1495]: time="2025-02-13T19:34:15.687115868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:15.702924 containerd[1495]: time="2025-02-13T19:34:15.702846476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:34:15.716244 containerd[1495]: time="2025-02-13T19:34:15.716150978Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:15.746662 containerd[1495]: time="2025-02-13T19:34:15.746543297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:15.747342 containerd[1495]: time="2025-02-13T19:34:15.747143834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.402379764s" Feb 13 19:34:15.747342 containerd[1495]: time="2025-02-13T19:34:15.747225497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:34:15.767420 containerd[1495]: time="2025-02-13T19:34:15.767167319Z" level=info msg="CreateContainer 
within sandbox \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:34:15.793335 containerd[1495]: time="2025-02-13T19:34:15.793115104Z" level=info msg="CreateContainer within sandbox \"c6b7104b0df55afbcebd99b39647fd1162fefb4de88f91311c0a473051dc58e7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6d25c749fedbbfa429a99bcee3943cbbb471bd48806dbad094c4aff0635bc106\"" Feb 13 19:34:15.794983 containerd[1495]: time="2025-02-13T19:34:15.794942935Z" level=info msg="StartContainer for \"6d25c749fedbbfa429a99bcee3943cbbb471bd48806dbad094c4aff0635bc106\"" Feb 13 19:34:15.824479 containerd[1495]: time="2025-02-13T19:34:15.823982614Z" level=error msg="Failed to destroy network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.826537 containerd[1495]: time="2025-02-13T19:34:15.826495741Z" level=error msg="encountered an error cleaning up failed sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.826942 containerd[1495]: time="2025-02-13T19:34:15.826912103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.828248 kubelet[2640]: E0213 19:34:15.827671 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.828248 kubelet[2640]: E0213 19:34:15.827747 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:15.828248 kubelet[2640]: E0213 19:34:15.827782 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:15.828403 kubelet[2640]: E0213 19:34:15.827850 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:15.842501 containerd[1495]: time="2025-02-13T19:34:15.842439739Z" level=error msg="Failed to destroy network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.843078 containerd[1495]: time="2025-02-13T19:34:15.843054072Z" level=error msg="encountered an error cleaning up failed sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.843231 containerd[1495]: time="2025-02-13T19:34:15.843209895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.843638 kubelet[2640]: E0213 19:34:15.843586 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.843705 kubelet[2640]: E0213 19:34:15.843654 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:15.843705 kubelet[2640]: E0213 19:34:15.843681 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:15.843811 kubelet[2640]: E0213 19:34:15.843730 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" 
podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:15.853322 containerd[1495]: time="2025-02-13T19:34:15.853257785Z" level=error msg="Failed to destroy network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.853909 containerd[1495]: time="2025-02-13T19:34:15.853868581Z" level=error msg="encountered an error cleaning up failed sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.854081 containerd[1495]: time="2025-02-13T19:34:15.854040344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.855508 kubelet[2640]: E0213 19:34:15.854444 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.855508 kubelet[2640]: E0213 19:34:15.854514 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:15.855508 kubelet[2640]: E0213 19:34:15.854533 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:15.855640 kubelet[2640]: E0213 19:34:15.854576 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:15.865466 containerd[1495]: time="2025-02-13T19:34:15.865396580Z" level=error msg="Failed to destroy network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.866149 containerd[1495]: time="2025-02-13T19:34:15.866121291Z" level=error msg="encountered an error cleaning up failed sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.866393 containerd[1495]: time="2025-02-13T19:34:15.866326105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.867169 kubelet[2640]: E0213 19:34:15.866722 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.867169 kubelet[2640]: E0213 19:34:15.866790 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:15.867169 kubelet[2640]: E0213 19:34:15.866816 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:15.867323 kubelet[2640]: E0213 19:34:15.866879 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:15.867640 containerd[1495]: time="2025-02-13T19:34:15.867607600Z" level=error msg="Failed to destroy network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.868016 containerd[1495]: time="2025-02-13T19:34:15.867985209Z" level=error msg="encountered an error cleaning up failed sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.868092 containerd[1495]: time="2025-02-13T19:34:15.868046414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.868304 kubelet[2640]: E0213 19:34:15.868237 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.868738 kubelet[2640]: E0213 19:34:15.868272 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:15.868738 kubelet[2640]: E0213 19:34:15.868504 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:15.868738 kubelet[2640]: E0213 19:34:15.868677 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:15.870060 containerd[1495]: time="2025-02-13T19:34:15.870004880Z" level=error msg="Failed to destroy network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.870399 containerd[1495]: time="2025-02-13T19:34:15.870374173Z" level=error msg="encountered an error cleaning up failed sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.870716 containerd[1495]: time="2025-02-13T19:34:15.870420049Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.870780 kubelet[2640]: E0213 19:34:15.870554 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:15.870780 kubelet[2640]: E0213 19:34:15.870585 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:15.870780 kubelet[2640]: E0213 19:34:15.870604 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" Feb 13 19:34:15.871023 kubelet[2640]: E0213 19:34:15.870644 2640 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7758bf7464-bz5p8_calico-system(b91e83e8-b503-4a59-bbef-bf279f88f9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podUID="b91e83e8-b503-4a59-bbef-bf279f88f9d9" Feb 13 19:34:15.903401 systemd[1]: Started cri-containerd-6d25c749fedbbfa429a99bcee3943cbbb471bd48806dbad094c4aff0635bc106.scope - libcontainer container 6d25c749fedbbfa429a99bcee3943cbbb471bd48806dbad094c4aff0635bc106. Feb 13 19:34:15.932774 kubelet[2640]: I0213 19:34:15.932738 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371" Feb 13 19:34:15.933466 containerd[1495]: time="2025-02-13T19:34:15.933430874Z" level=info msg="StopPodSandbox for \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\"" Feb 13 19:34:15.933754 containerd[1495]: time="2025-02-13T19:34:15.933682396Z" level=info msg="Ensure that sandbox 3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371 in task-service has been cleanup successfully" Feb 13 19:34:15.935471 containerd[1495]: time="2025-02-13T19:34:15.935419586Z" level=info msg="TearDown network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" successfully" Feb 13 19:34:15.935471 containerd[1495]: time="2025-02-13T19:34:15.935444694Z" level=info msg="StopPodSandbox for 
\"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" returns successfully" Feb 13 19:34:15.937232 systemd[1]: run-netns-cni\x2d6ea7e2d5\x2d6c15\x2d51ec\x2d4c15\x2d0ae20425fe98.mount: Deactivated successfully. Feb 13 19:34:15.938461 containerd[1495]: time="2025-02-13T19:34:15.938423615Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" Feb 13 19:34:15.938551 containerd[1495]: time="2025-02-13T19:34:15.938533011Z" level=info msg="TearDown network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" successfully" Feb 13 19:34:15.938636 containerd[1495]: time="2025-02-13T19:34:15.938549522Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" returns successfully" Feb 13 19:34:15.940407 containerd[1495]: time="2025-02-13T19:34:15.940248642Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:15.940407 containerd[1495]: time="2025-02-13T19:34:15.940355111Z" level=info msg="TearDown network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" successfully" Feb 13 19:34:15.940407 containerd[1495]: time="2025-02-13T19:34:15.940368587Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" returns successfully" Feb 13 19:34:15.940896 containerd[1495]: time="2025-02-13T19:34:15.940835313Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:15.941037 kubelet[2640]: I0213 19:34:15.940960 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972" Feb 13 19:34:15.941108 containerd[1495]: time="2025-02-13T19:34:15.941086063Z" level=info msg="TearDown network for sandbox 
\"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" successfully" Feb 13 19:34:15.941108 containerd[1495]: time="2025-02-13T19:34:15.941102093Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:15.941792 containerd[1495]: time="2025-02-13T19:34:15.941563920Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:15.941792 containerd[1495]: time="2025-02-13T19:34:15.941656534Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:15.941792 containerd[1495]: time="2025-02-13T19:34:15.941667525Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:15.945095 containerd[1495]: time="2025-02-13T19:34:15.942360265Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:15.945095 containerd[1495]: time="2025-02-13T19:34:15.942421249Z" level=info msg="StopPodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\"" Feb 13 19:34:15.945095 containerd[1495]: time="2025-02-13T19:34:15.942513803Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:15.945095 containerd[1495]: time="2025-02-13T19:34:15.942529352Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:15.945095 containerd[1495]: time="2025-02-13T19:34:15.942631424Z" level=info msg="Ensure that sandbox 524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972 in task-service has been cleanup successfully" Feb 13 19:34:15.945689 containerd[1495]: time="2025-02-13T19:34:15.945263244Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:6,}" Feb 13 19:34:15.947821 containerd[1495]: time="2025-02-13T19:34:15.947796539Z" level=info msg="TearDown network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" successfully" Feb 13 19:34:15.947953 containerd[1495]: time="2025-02-13T19:34:15.947892630Z" level=info msg="StopPodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" returns successfully" Feb 13 19:34:15.948234 systemd[1]: run-netns-cni\x2d0475d8ec\x2d15b5\x2d4730\x2d7611\x2d43b8c7306d8a.mount: Deactivated successfully. Feb 13 19:34:15.949226 containerd[1495]: time="2025-02-13T19:34:15.948323588Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" Feb 13 19:34:15.949226 containerd[1495]: time="2025-02-13T19:34:15.949173374Z" level=info msg="TearDown network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" successfully" Feb 13 19:34:15.949351 containerd[1495]: time="2025-02-13T19:34:15.949204873Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" returns successfully" Feb 13 19:34:15.949784 containerd[1495]: time="2025-02-13T19:34:15.949603100Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:15.949784 containerd[1495]: time="2025-02-13T19:34:15.949702156Z" level=info msg="TearDown network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" successfully" Feb 13 19:34:15.949784 containerd[1495]: time="2025-02-13T19:34:15.949716483Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" returns successfully" Feb 13 19:34:15.950063 containerd[1495]: time="2025-02-13T19:34:15.950035722Z" level=info msg="StopPodSandbox for 
\"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:15.950390 containerd[1495]: time="2025-02-13T19:34:15.950313273Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:15.950390 containerd[1495]: time="2025-02-13T19:34:15.950332630Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:15.950706 containerd[1495]: time="2025-02-13T19:34:15.950679821Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:15.950846 containerd[1495]: time="2025-02-13T19:34:15.950779217Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:15.950846 containerd[1495]: time="2025-02-13T19:34:15.950796010Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:15.951285 containerd[1495]: time="2025-02-13T19:34:15.951100531Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:15.951285 containerd[1495]: time="2025-02-13T19:34:15.951218783Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:15.951285 containerd[1495]: time="2025-02-13T19:34:15.951233029Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:15.951569 containerd[1495]: time="2025-02-13T19:34:15.951490353Z" level=info msg="StartContainer for \"6d25c749fedbbfa429a99bcee3943cbbb471bd48806dbad094c4aff0635bc106\" returns successfully" Feb 13 19:34:15.952007 kubelet[2640]: I0213 19:34:15.951750 2640 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5" Feb 13 19:34:15.952114 containerd[1495]: time="2025-02-13T19:34:15.951763766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:34:15.952917 containerd[1495]: time="2025-02-13T19:34:15.952622328Z" level=info msg="StopPodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\"" Feb 13 19:34:15.952917 containerd[1495]: time="2025-02-13T19:34:15.952826280Z" level=info msg="Ensure that sandbox 6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5 in task-service has been cleanup successfully" Feb 13 19:34:15.957065 containerd[1495]: time="2025-02-13T19:34:15.956937497Z" level=info msg="TearDown network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" successfully" Feb 13 19:34:15.957065 containerd[1495]: time="2025-02-13T19:34:15.956986048Z" level=info msg="StopPodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" returns successfully" Feb 13 19:34:15.958378 containerd[1495]: time="2025-02-13T19:34:15.958087947Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\"" Feb 13 19:34:15.958378 containerd[1495]: time="2025-02-13T19:34:15.958241905Z" level=info msg="TearDown network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" successfully" Feb 13 19:34:15.958378 containerd[1495]: time="2025-02-13T19:34:15.958258376Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" returns successfully" Feb 13 19:34:15.958512 systemd[1]: run-netns-cni\x2d6e4d1c50\x2dbb00\x2d56ee\x2d4b31\x2da41889847256.mount: Deactivated successfully. 
Feb 13 19:34:15.959744 containerd[1495]: time="2025-02-13T19:34:15.959508923Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\"" Feb 13 19:34:15.961327 containerd[1495]: time="2025-02-13T19:34:15.961299434Z" level=info msg="TearDown network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" successfully" Feb 13 19:34:15.962040 containerd[1495]: time="2025-02-13T19:34:15.961956648Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" returns successfully" Feb 13 19:34:15.962594 containerd[1495]: time="2025-02-13T19:34:15.962570711Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:15.962976 containerd[1495]: time="2025-02-13T19:34:15.962824878Z" level=info msg="TearDown network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:15.962976 containerd[1495]: time="2025-02-13T19:34:15.962841269Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully" Feb 13 19:34:15.963313 containerd[1495]: time="2025-02-13T19:34:15.963140109Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:15.963313 containerd[1495]: time="2025-02-13T19:34:15.963271036Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:15.963313 containerd[1495]: time="2025-02-13T19:34:15.963282517Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:15.963888 containerd[1495]: time="2025-02-13T19:34:15.963723404Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:15.963888 
containerd[1495]: time="2025-02-13T19:34:15.963848670Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:15.964078 containerd[1495]: time="2025-02-13T19:34:15.963862887Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:15.965201 containerd[1495]: time="2025-02-13T19:34:15.965091052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:34:15.966483 kubelet[2640]: I0213 19:34:15.966457 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8" Feb 13 19:34:15.967586 containerd[1495]: time="2025-02-13T19:34:15.967556940Z" level=info msg="StopPodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\"" Feb 13 19:34:15.967817 containerd[1495]: time="2025-02-13T19:34:15.967773607Z" level=info msg="Ensure that sandbox a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8 in task-service has been cleanup successfully" Feb 13 19:34:15.968037 containerd[1495]: time="2025-02-13T19:34:15.968005092Z" level=info msg="TearDown network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" successfully" Feb 13 19:34:15.968124 containerd[1495]: time="2025-02-13T19:34:15.968105700Z" level=info msg="StopPodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" returns successfully" Feb 13 19:34:15.968694 containerd[1495]: time="2025-02-13T19:34:15.968664600Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" Feb 13 19:34:15.968791 containerd[1495]: time="2025-02-13T19:34:15.968764858Z" level=info msg="TearDown network for sandbox 
\"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" successfully" Feb 13 19:34:15.968791 containerd[1495]: time="2025-02-13T19:34:15.968781729Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" returns successfully" Feb 13 19:34:15.970568 containerd[1495]: time="2025-02-13T19:34:15.970386311Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:15.970568 containerd[1495]: time="2025-02-13T19:34:15.970496648Z" level=info msg="TearDown network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" successfully" Feb 13 19:34:15.970568 containerd[1495]: time="2025-02-13T19:34:15.970523198Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" returns successfully" Feb 13 19:34:15.971658 kubelet[2640]: I0213 19:34:15.971269 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f" Feb 13 19:34:15.971771 systemd[1]: run-netns-cni\x2d84fc7b6c\x2d0920\x2dede4\x2d62cb\x2d7ac0aa0fe6a7.mount: Deactivated successfully. 
Feb 13 19:34:15.974087 containerd[1495]: time="2025-02-13T19:34:15.972040346Z" level=info msg="StopPodSandbox for \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\"" Feb 13 19:34:15.974608 containerd[1495]: time="2025-02-13T19:34:15.974426345Z" level=info msg="Ensure that sandbox 6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f in task-service has been cleanup successfully" Feb 13 19:34:15.974608 containerd[1495]: time="2025-02-13T19:34:15.972516339Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:15.974703 containerd[1495]: time="2025-02-13T19:34:15.974632171Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:15.974703 containerd[1495]: time="2025-02-13T19:34:15.974646368Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:15.975362 containerd[1495]: time="2025-02-13T19:34:15.975165652Z" level=info msg="TearDown network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\" successfully" Feb 13 19:34:15.975362 containerd[1495]: time="2025-02-13T19:34:15.975186201Z" level=info msg="StopPodSandbox for \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\" returns successfully" Feb 13 19:34:15.975492 containerd[1495]: time="2025-02-13T19:34:15.975251533Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:15.975492 containerd[1495]: time="2025-02-13T19:34:15.975452500Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:15.975492 containerd[1495]: time="2025-02-13T19:34:15.975465635Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" 
returns successfully" Feb 13 19:34:15.975946 containerd[1495]: time="2025-02-13T19:34:15.975797208Z" level=info msg="StopPodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\"" Feb 13 19:34:15.976400 containerd[1495]: time="2025-02-13T19:34:15.976161802Z" level=info msg="TearDown network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" successfully" Feb 13 19:34:15.976400 containerd[1495]: time="2025-02-13T19:34:15.976181059Z" level=info msg="StopPodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" returns successfully" Feb 13 19:34:15.976400 containerd[1495]: time="2025-02-13T19:34:15.976244357Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:15.976400 containerd[1495]: time="2025-02-13T19:34:15.976331431Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:15.976400 containerd[1495]: time="2025-02-13T19:34:15.976345357Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:15.976602 kubelet[2640]: E0213 19:34:15.976578 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:15.977360 kubelet[2640]: I0213 19:34:15.976763 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00" Feb 13 19:34:15.977603 containerd[1495]: time="2025-02-13T19:34:15.977580054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:6,}" Feb 13 19:34:15.977847 containerd[1495]: time="2025-02-13T19:34:15.977827088Z" level=info 
msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" Feb 13 19:34:15.977934 containerd[1495]: time="2025-02-13T19:34:15.977917858Z" level=info msg="TearDown network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" successfully" Feb 13 19:34:15.977962 containerd[1495]: time="2025-02-13T19:34:15.977934219Z" level=info msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" returns successfully" Feb 13 19:34:15.977995 containerd[1495]: time="2025-02-13T19:34:15.977978853Z" level=info msg="StopPodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\"" Feb 13 19:34:15.978184 containerd[1495]: time="2025-02-13T19:34:15.978167317Z" level=info msg="Ensure that sandbox 0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00 in task-service has been cleanup successfully" Feb 13 19:34:15.978647 containerd[1495]: time="2025-02-13T19:34:15.978564843Z" level=info msg="TearDown network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" successfully" Feb 13 19:34:15.978647 containerd[1495]: time="2025-02-13T19:34:15.978586113Z" level=info msg="StopPodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" returns successfully" Feb 13 19:34:15.978832 containerd[1495]: time="2025-02-13T19:34:15.978790416Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:15.978907 containerd[1495]: time="2025-02-13T19:34:15.978889392Z" level=info msg="TearDown network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" successfully" Feb 13 19:34:15.978941 containerd[1495]: time="2025-02-13T19:34:15.978904761Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" returns successfully" Feb 13 19:34:15.979261 containerd[1495]: 
time="2025-02-13T19:34:15.979239670Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:15.979454 containerd[1495]: time="2025-02-13T19:34:15.979127619Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" Feb 13 19:34:15.979633 containerd[1495]: time="2025-02-13T19:34:15.979618260Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:15.979745 containerd[1495]: time="2025-02-13T19:34:15.979731343Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:15.979846 containerd[1495]: time="2025-02-13T19:34:15.979715914Z" level=info msg="TearDown network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" successfully" Feb 13 19:34:15.979923 containerd[1495]: time="2025-02-13T19:34:15.979906892Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" returns successfully" Feb 13 19:34:15.980132 containerd[1495]: time="2025-02-13T19:34:15.980104472Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:15.980234 containerd[1495]: time="2025-02-13T19:34:15.980214328Z" level=info msg="TearDown network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:15.980287 containerd[1495]: time="2025-02-13T19:34:15.980232392Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:15.980524 containerd[1495]: time="2025-02-13T19:34:15.980495076Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:15.980601 containerd[1495]: 
time="2025-02-13T19:34:15.980589964Z" level=info msg="TearDown network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" successfully" Feb 13 19:34:15.980640 containerd[1495]: time="2025-02-13T19:34:15.980603269Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" returns successfully" Feb 13 19:34:15.980821 containerd[1495]: time="2025-02-13T19:34:15.980750254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:6,}" Feb 13 19:34:15.981843 containerd[1495]: time="2025-02-13T19:34:15.981820784Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:15.982040 containerd[1495]: time="2025-02-13T19:34:15.981996864Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:15.982040 containerd[1495]: time="2025-02-13T19:34:15.982014457Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:15.982594 containerd[1495]: time="2025-02-13T19:34:15.982529685Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:15.982754 containerd[1495]: time="2025-02-13T19:34:15.982610246Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:15.982754 containerd[1495]: time="2025-02-13T19:34:15.982619242Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:15.983088 containerd[1495]: time="2025-02-13T19:34:15.982978117Z" level=info msg="StopPodSandbox for 
\"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:15.983139 containerd[1495]: time="2025-02-13T19:34:15.983089035Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:15.983139 containerd[1495]: time="2025-02-13T19:34:15.983103231Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:15.983599 kubelet[2640]: E0213 19:34:15.983327 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:15.984084 containerd[1495]: time="2025-02-13T19:34:15.983809237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:6,}" Feb 13 19:34:16.035362 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:34:16.035483 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:34:16.112999 containerd[1495]: time="2025-02-13T19:34:16.112936644Z" level=error msg="Failed to destroy network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.114447 containerd[1495]: time="2025-02-13T19:34:16.114235862Z" level=error msg="encountered an error cleaning up failed sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.114447 containerd[1495]: time="2025-02-13T19:34:16.114312145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.114656 kubelet[2640]: E0213 19:34:16.114606 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.114869 kubelet[2640]: E0213 19:34:16.114695 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:16.114869 kubelet[2640]: E0213 19:34:16.114727 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mrdz6" Feb 13 19:34:16.114869 kubelet[2640]: E0213 19:34:16.114780 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mrdz6_calico-system(1b3660a1-47a7-4062-b8e4-0e63486cf899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mrdz6" podUID="1b3660a1-47a7-4062-b8e4-0e63486cf899" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.212 [INFO][5095] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.213 [INFO][5095] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" iface="eth0" netns="/var/run/netns/cni-6fc72fb9-a6f9-c758-bc4a-f0c1eb9c5049" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.213 [INFO][5095] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" iface="eth0" netns="/var/run/netns/cni-6fc72fb9-a6f9-c758-bc4a-f0c1eb9c5049" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.213 [INFO][5095] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" iface="eth0" netns="/var/run/netns/cni-6fc72fb9-a6f9-c758-bc4a-f0c1eb9c5049" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.213 [INFO][5095] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.213 [INFO][5095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" HandleID="k8s-pod-network.14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Workload="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.298 [WARNING][5132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" HandleID="k8s-pod-network.14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Workload="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.298 [INFO][5132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" HandleID="k8s-pod-network.14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Workload="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.299 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:16.304381 containerd[1495]: 2025-02-13 19:34:16.302 [INFO][5095] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.181 [INFO][5024] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.182 [INFO][5024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" iface="eth0" netns="/var/run/netns/cni-1745a8bc-fe21-0266-412b-e86cc0a8454c" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.184 [INFO][5024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" iface="eth0" netns="/var/run/netns/cni-1745a8bc-fe21-0266-412b-e86cc0a8454c" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.188 [INFO][5024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" iface="eth0" netns="/var/run/netns/cni-1745a8bc-fe21-0266-412b-e86cc0a8454c" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.189 [INFO][5024] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.190 [INFO][5024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.288 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" HandleID="k8s-pod-network.bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.299 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.348 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" HandleID="k8s-pod-network.bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.348 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" HandleID="k8s-pod-network.bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.349 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:16.354712 containerd[1495]: 2025-02-13 19:34:16.351 [INFO][5024] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.196 [INFO][5058] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.197 [INFO][5058] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" iface="eth0" netns="/var/run/netns/cni-42d7ad4c-8cea-27ce-2143-a8f7e97bab27" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.197 [INFO][5058] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" iface="eth0" netns="/var/run/netns/cni-42d7ad4c-8cea-27ce-2143-a8f7e97bab27" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.197 [INFO][5058] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" iface="eth0" netns="/var/run/netns/cni-42d7ad4c-8cea-27ce-2143-a8f7e97bab27" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.198 [INFO][5058] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.198 [INFO][5058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.288 [INFO][5118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" HandleID="k8s-pod-network.06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.349 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.354 [WARNING][5118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" HandleID="k8s-pod-network.06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.354 [INFO][5118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" HandleID="k8s-pod-network.06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.355 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:16.361308 containerd[1495]: 2025-02-13 19:34:16.358 [INFO][5058] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.223 [INFO][5092] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.224 [INFO][5092] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" iface="eth0" netns="/var/run/netns/cni-eb408569-43e0-e15e-0a45-06d073f20c8b" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.224 [INFO][5092] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" iface="eth0" netns="/var/run/netns/cni-eb408569-43e0-e15e-0a45-06d073f20c8b" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.225 [INFO][5092] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" iface="eth0" netns="/var/run/netns/cni-eb408569-43e0-e15e-0a45-06d073f20c8b" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.225 [INFO][5092] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.225 [INFO][5092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" HandleID="k8s-pod-network.d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Workload="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.355 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.421 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" HandleID="k8s-pod-network.d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Workload="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.421 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" HandleID="k8s-pod-network.d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Workload="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.423 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:16.428168 containerd[1495]: 2025-02-13 19:34:16.425 [INFO][5092] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0" Feb 13 19:34:16.576717 containerd[1495]: time="2025-02-13T19:34:16.576551653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.576971 kubelet[2640]: E0213 19:34:16.576924 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.577211 kubelet[2640]: E0213 19:34:16.577127 2640 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:16.577372 kubelet[2640]: E0213 19:34:16.577316 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n72h7" Feb 13 19:34:16.577544 kubelet[2640]: E0213 19:34:16.577492 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n72h7_kube-system(7d4597d2-6027-4ade-9599-11a9fb3937e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14a1e2f46abbad73e19adc7270b36f71a8b5aa90d3897827a72a5ea3a2cf1f5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n72h7" podUID="7d4597d2-6027-4ade-9599-11a9fb3937e8" Feb 13 19:34:16.580249 containerd[1495]: time="2025-02-13T19:34:16.580200251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to 
setup network for sandbox \"bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.581654 kubelet[2640]: E0213 19:34:16.581575 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.581977 kubelet[2640]: E0213 19:34:16.581632 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:16.581977 kubelet[2640]: E0213 19:34:16.581869 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" Feb 13 19:34:16.582326 kubelet[2640]: E0213 19:34:16.581938 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-kmrnk_calico-apiserver(8000c7db-76d3-42a2-88ef-9e561c300a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc1393e767c69e60e1a8c253c5c25012a5d41830434b4330be4b35c5603416ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podUID="8000c7db-76d3-42a2-88ef-9e561c300a00" Feb 13 19:34:16.594310 systemd-networkd[1417]: caliad8d12a020a: Link UP Feb 13 19:34:16.595381 systemd-networkd[1417]: caliad8d12a020a: Gained carrier Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.139 [INFO][4995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.170 [INFO][4995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0 calico-kube-controllers-7758bf7464- calico-system b91e83e8-b503-4a59-bbef-bf279f88f9d9 779 0 2025-02-13 19:33:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7758bf7464 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7758bf7464-bz5p8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliad8d12a020a [] []}} ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.170 [INFO][4995] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.289 [INFO][5121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" HandleID="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Workload="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.498 [INFO][5121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" HandleID="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Workload="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004058b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7758bf7464-bz5p8", "timestamp":"2025-02-13 19:34:16.28924293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.498 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.498 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.498 [INFO][5121] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.500 [INFO][5121] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.504 [INFO][5121] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.508 [INFO][5121] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.509 [INFO][5121] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.554 [INFO][5121] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.554 [INFO][5121] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.556 [INFO][5121] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24 Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.571 [INFO][5121] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.581 [INFO][5121] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.581 [INFO][5121] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" host="localhost" Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.581 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:16.609181 containerd[1495]: 2025-02-13 19:34:16.581 [INFO][5121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" HandleID="k8s-pod-network.c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Workload="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.610083 containerd[1495]: 2025-02-13 19:34:16.585 [INFO][4995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0", GenerateName:"calico-kube-controllers-7758bf7464-", Namespace:"calico-system", SelfLink:"", UID:"b91e83e8-b503-4a59-bbef-bf279f88f9d9", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7758bf7464", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7758bf7464-bz5p8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad8d12a020a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:16.610083 containerd[1495]: 2025-02-13 19:34:16.586 [INFO][4995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.610083 containerd[1495]: 2025-02-13 19:34:16.586 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad8d12a020a ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.610083 containerd[1495]: 2025-02-13 19:34:16.594 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.610083 containerd[1495]: 2025-02-13 19:34:16.594 [INFO][4995] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0", GenerateName:"calico-kube-controllers-7758bf7464-", Namespace:"calico-system", SelfLink:"", UID:"b91e83e8-b503-4a59-bbef-bf279f88f9d9", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7758bf7464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24", Pod:"calico-kube-controllers-7758bf7464-bz5p8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliad8d12a020a", MAC:"06:bd:39:b5:75:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:16.610083 containerd[1495]: 2025-02-13 19:34:16.604 [INFO][4995] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24" Namespace="calico-system" Pod="calico-kube-controllers-7758bf7464-bz5p8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7758bf7464--bz5p8-eth0" Feb 13 19:34:16.610083 containerd[1495]: time="2025-02-13T19:34:16.609843450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.610684 kubelet[2640]: E0213 19:34:16.610147 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.610684 kubelet[2640]: E0213 19:34:16.610263 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:16.610684 kubelet[2640]: E0213 19:34:16.610291 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-822rj" Feb 13 19:34:16.610793 kubelet[2640]: E0213 19:34:16.610342 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-822rj_kube-system(b4f5f379-1bf2-49a8-b809-0761222a6c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1120400cccc35894c61bf33a3ede50d3d490da586d87ceb7b3096986b83ace0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-822rj" podUID="b4f5f379-1bf2-49a8-b809-0761222a6c07" Feb 13 19:34:16.612210 containerd[1495]: time="2025-02-13T19:34:16.611410121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.612294 kubelet[2640]: E0213 19:34:16.611669 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:16.612294 kubelet[2640]: E0213 19:34:16.611728 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:16.612294 kubelet[2640]: E0213 19:34:16.611753 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" Feb 13 19:34:16.612403 kubelet[2640]: E0213 19:34:16.611813 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d7bff6f-h9gmf_calico-apiserver(9dd6108c-e0cd-41e5-bd6d-20be6e54c890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06a50f2aa075c63b1b9e556a43ab8c6d7c710a576b59363856b86419dd92d121\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podUID="9dd6108c-e0cd-41e5-bd6d-20be6e54c890" Feb 13 19:34:16.674836 containerd[1495]: 
time="2025-02-13T19:34:16.674132548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:16.674836 containerd[1495]: time="2025-02-13T19:34:16.674785393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:16.674836 containerd[1495]: time="2025-02-13T19:34:16.674798247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:16.675110 containerd[1495]: time="2025-02-13T19:34:16.674877947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:16.700373 systemd[1]: Started cri-containerd-c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24.scope - libcontainer container c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24. Feb 13 19:34:16.713126 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:16.737512 containerd[1495]: time="2025-02-13T19:34:16.737468635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7758bf7464-bz5p8,Uid:b91e83e8-b503-4a59-bbef-bf279f88f9d9,Namespace:calico-system,Attempt:6,} returns sandbox id \"c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24\"" Feb 13 19:34:16.739298 containerd[1495]: time="2025-02-13T19:34:16.739260899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:34:16.918915 systemd[1]: run-netns-cni\x2da9720c23\x2da6e5\x2d90b3\x2d3b11\x2d48b4ad5c1f76.mount: Deactivated successfully. Feb 13 19:34:16.919029 systemd[1]: run-netns-cni\x2dedb85307\x2d25c0\x2d21dc\x2dcda4\x2db0c1cfb86a80.mount: Deactivated successfully. 
Feb 13 19:34:16.984127 kubelet[2640]: E0213 19:34:16.984072 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.988598 kubelet[2640]: I0213 19:34:16.988519 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870" Feb 13 19:34:16.988788 containerd[1495]: time="2025-02-13T19:34:16.988695184Z" level=info msg="StopPodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\"" Feb 13 19:34:16.989348 containerd[1495]: time="2025-02-13T19:34:16.988798268Z" level=info msg="TearDown network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" successfully" Feb 13 19:34:16.989348 containerd[1495]: time="2025-02-13T19:34:16.988808317Z" level=info msg="StopPodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" returns successfully" Feb 13 19:34:16.989348 containerd[1495]: time="2025-02-13T19:34:16.989252881Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" Feb 13 19:34:16.989348 containerd[1495]: time="2025-02-13T19:34:16.989333863Z" level=info msg="TearDown network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" successfully" Feb 13 19:34:16.989348 containerd[1495]: time="2025-02-13T19:34:16.989343872Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" returns successfully" Feb 13 19:34:16.989585 containerd[1495]: time="2025-02-13T19:34:16.989383757Z" level=info msg="StopPodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\"" Feb 13 19:34:16.989585 containerd[1495]: time="2025-02-13T19:34:16.989445032Z" level=info msg="TearDown network for sandbox 
\"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" successfully" Feb 13 19:34:16.989585 containerd[1495]: time="2025-02-13T19:34:16.989453017Z" level=info msg="StopPodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" returns successfully" Feb 13 19:34:16.989585 containerd[1495]: time="2025-02-13T19:34:16.989485879Z" level=info msg="StopPodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\"" Feb 13 19:34:16.989585 containerd[1495]: time="2025-02-13T19:34:16.989553105Z" level=info msg="TearDown network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" successfully" Feb 13 19:34:16.989585 containerd[1495]: time="2025-02-13T19:34:16.989560970Z" level=info msg="StopPodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" returns successfully" Feb 13 19:34:16.989856 containerd[1495]: time="2025-02-13T19:34:16.989614690Z" level=info msg="StopPodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\"" Feb 13 19:34:16.989856 containerd[1495]: time="2025-02-13T19:34:16.989675214Z" level=info msg="TearDown network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" successfully" Feb 13 19:34:16.989856 containerd[1495]: time="2025-02-13T19:34:16.989683409Z" level=info msg="StopPodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" returns successfully" Feb 13 19:34:16.990043 containerd[1495]: time="2025-02-13T19:34:16.989997608Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\"" Feb 13 19:34:16.990096 containerd[1495]: time="2025-02-13T19:34:16.990082739Z" level=info msg="TearDown network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" successfully" Feb 13 19:34:16.990146 containerd[1495]: time="2025-02-13T19:34:16.990094141Z" level=info msg="StopPodSandbox for 
\"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" returns successfully" Feb 13 19:34:16.990146 containerd[1495]: time="2025-02-13T19:34:16.990128525Z" level=info msg="StopPodSandbox for \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\"" Feb 13 19:34:16.990346 containerd[1495]: time="2025-02-13T19:34:16.990330454Z" level=info msg="Ensure that sandbox 78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870 in task-service has been cleanup successfully" Feb 13 19:34:16.990525 containerd[1495]: time="2025-02-13T19:34:16.990501104Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:16.990656 containerd[1495]: time="2025-02-13T19:34:16.990641738Z" level=info msg="TearDown network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" successfully" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990708213Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" returns successfully" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990757736Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990836434Z" level=info msg="TearDown network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" successfully" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990844990Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" returns successfully" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990876809Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990937864Z" level=info msg="TearDown network for 
sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" successfully" Feb 13 19:34:16.990972 containerd[1495]: time="2025-02-13T19:34:16.990945688Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" returns successfully" Feb 13 19:34:16.991612 containerd[1495]: time="2025-02-13T19:34:16.991466596Z" level=info msg="TearDown network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\" successfully" Feb 13 19:34:16.991612 containerd[1495]: time="2025-02-13T19:34:16.991481665Z" level=info msg="StopPodSandbox for \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\" returns successfully" Feb 13 19:34:16.991992 containerd[1495]: time="2025-02-13T19:34:16.991799691Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\"" Feb 13 19:34:16.991992 containerd[1495]: time="2025-02-13T19:34:16.991877187Z" level=info msg="TearDown network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" successfully" Feb 13 19:34:16.991992 containerd[1495]: time="2025-02-13T19:34:16.991885673Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" returns successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992179144Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992268852Z" level=info msg="TearDown network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992278530Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" returns successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992315139Z" level=info 
msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992375362Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992382695Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992422620Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992479196Z" level=info msg="TearDown network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992486320Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" returns successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992524242Z" level=info msg="StopPodSandbox for \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\"" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992578333Z" level=info msg="TearDown network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" successfully" Feb 13 19:34:16.992608 containerd[1495]: time="2025-02-13T19:34:16.992585667Z" level=info msg="StopPodSandbox for \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" returns successfully" Feb 13 19:34:16.992984 containerd[1495]: time="2025-02-13T19:34:16.992962444Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:16.993163 containerd[1495]: time="2025-02-13T19:34:16.993132452Z" level=info msg="TearDown network for 
sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:16.993163 containerd[1495]: time="2025-02-13T19:34:16.993145727Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully" Feb 13 19:34:16.995394 containerd[1495]: time="2025-02-13T19:34:16.995298358Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:16.995453 containerd[1495]: time="2025-02-13T19:34:16.995409417Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:16.995453 containerd[1495]: time="2025-02-13T19:34:16.995422601Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns successfully" Feb 13 19:34:16.995509 containerd[1495]: time="2025-02-13T19:34:16.995297366Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:16.995541 containerd[1495]: time="2025-02-13T19:34:16.995529512Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:16.995570 containerd[1495]: time="2025-02-13T19:34:16.995539961Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:16.995820 systemd[1]: run-netns-cni\x2dd5a2447f\x2d0c07\x2d5b9c\x2d6a37\x2db0f54cc9e02c.mount: Deactivated successfully. 
Feb 13 19:34:17.004368 containerd[1495]: time="2025-02-13T19:34:17.004223019Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004574959Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004592111Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004635573Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004683142Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004750760Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004758314Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004791196Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004849194Z" level=info msg="TearDown network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" successfully" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004856989Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" returns successfully" Feb 13 19:34:17.005039 
containerd[1495]: time="2025-02-13T19:34:17.004919066Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.004996781Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:17.005039 containerd[1495]: time="2025-02-13T19:34:17.005018231Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:17.005609 containerd[1495]: time="2025-02-13T19:34:17.005108080Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:17.005699 containerd[1495]: time="2025-02-13T19:34:17.005581619Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:17.006827 containerd[1495]: time="2025-02-13T19:34:17.006583569Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:17.006827 containerd[1495]: time="2025-02-13T19:34:17.006679509Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:17.006827 containerd[1495]: time="2025-02-13T19:34:17.006692163Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:17.006827 containerd[1495]: time="2025-02-13T19:34:17.006748830Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:17.006827 containerd[1495]: time="2025-02-13T19:34:17.006821386Z" level=info msg="TearDown network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" successfully" Feb 13 19:34:17.007184 
containerd[1495]: time="2025-02-13T19:34:17.006832757Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" returns successfully" Feb 13 19:34:17.007184 containerd[1495]: time="2025-02-13T19:34:17.006909611Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:17.007184 containerd[1495]: time="2025-02-13T19:34:17.007077777Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:17.007184 containerd[1495]: time="2025-02-13T19:34:17.007146846Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:17.008089 containerd[1495]: time="2025-02-13T19:34:17.007809801Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:17.008089 containerd[1495]: time="2025-02-13T19:34:17.007898848Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:17.008089 containerd[1495]: time="2025-02-13T19:34:17.007910890Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:17.008089 containerd[1495]: time="2025-02-13T19:34:17.007967987Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:17.008184 kubelet[2640]: E0213 19:34:17.007514 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:17.008239 containerd[1495]: time="2025-02-13T19:34:17.008122498Z" level=info msg="TearDown network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" 
successfully" Feb 13 19:34:17.008239 containerd[1495]: time="2025-02-13T19:34:17.008133889Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:17.008319 containerd[1495]: time="2025-02-13T19:34:17.008302015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:6,}" Feb 13 19:34:17.008557 containerd[1495]: time="2025-02-13T19:34:17.008516727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:34:17.008738 containerd[1495]: time="2025-02-13T19:34:17.008709128Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:17.008894 containerd[1495]: time="2025-02-13T19:34:17.008820899Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:17.008894 containerd[1495]: time="2025-02-13T19:34:17.008833402Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:17.009261 kubelet[2640]: E0213 19:34:17.009237 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:17.009918 containerd[1495]: time="2025-02-13T19:34:17.009887811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:6,}" Feb 13 19:34:17.011127 containerd[1495]: time="2025-02-13T19:34:17.010665580Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:17.011127 
containerd[1495]: time="2025-02-13T19:34:17.010797609Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:17.011127 containerd[1495]: time="2025-02-13T19:34:17.010809872Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:17.011127 containerd[1495]: time="2025-02-13T19:34:17.010884662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:34:17.011860 containerd[1495]: time="2025-02-13T19:34:17.011806121Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:17.011972 containerd[1495]: time="2025-02-13T19:34:17.011887945Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:17.011972 containerd[1495]: time="2025-02-13T19:34:17.011901861Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:17.012292 containerd[1495]: time="2025-02-13T19:34:17.012267497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:7,}" Feb 13 19:34:17.346073 systemd-networkd[1417]: calic9a2c927720: Link UP Feb 13 19:34:17.346360 systemd-networkd[1417]: calic9a2c927720: Gained carrier Feb 13 19:34:17.355571 kubelet[2640]: I0213 19:34:17.355393 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d7xzm" podStartSLOduration=3.08474148 podStartE2EDuration="23.354734836s" podCreationTimestamp="2025-02-13 19:33:54 +0000 UTC" firstStartedPulling="2025-02-13 19:33:55.481275611 +0000 UTC m=+15.614592396" 
lastFinishedPulling="2025-02-13 19:34:15.751268967 +0000 UTC m=+35.884585752" observedRunningTime="2025-02-13 19:34:17.00646691 +0000 UTC m=+37.139783695" watchObservedRunningTime="2025-02-13 19:34:17.354734836 +0000 UTC m=+37.488051621" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.112 [INFO][5224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.128 [INFO][5224] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0 calico-apiserver-6c4d7bff6f- calico-apiserver 9dd6108c-e0cd-41e5-bd6d-20be6e54c890 947 0 2025-02-13 19:33:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c4d7bff6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c4d7bff6f-h9gmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9a2c927720 [] []}} ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.128 [INFO][5224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.209 [INFO][5315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" 
HandleID="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.319 [INFO][5315] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" HandleID="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000301d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c4d7bff6f-h9gmf", "timestamp":"2025-02-13 19:34:17.209452623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.320 [INFO][5315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.320 [INFO][5315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.320 [INFO][5315] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5315] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.326 [INFO][5315] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.330 [INFO][5315] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.331 [INFO][5315] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.333 [INFO][5315] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.333 [INFO][5315] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.334 [INFO][5315] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3 Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.337 [INFO][5315] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.341 [INFO][5315] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.341 [INFO][5315] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" host="localhost" Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.341 [INFO][5315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:17.357441 containerd[1495]: 2025-02-13 19:34:17.341 [INFO][5315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" HandleID="k8s-pod-network.71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.358131 containerd[1495]: 2025-02-13 19:34:17.344 [INFO][5224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0", GenerateName:"calico-apiserver-6c4d7bff6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9dd6108c-e0cd-41e5-bd6d-20be6e54c890", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4d7bff6f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c4d7bff6f-h9gmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9a2c927720", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.358131 containerd[1495]: 2025-02-13 19:34:17.344 [INFO][5224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.358131 containerd[1495]: 2025-02-13 19:34:17.344 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9a2c927720 ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.358131 containerd[1495]: 2025-02-13 19:34:17.346 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.358131 containerd[1495]: 2025-02-13 19:34:17.346 [INFO][5224] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0", GenerateName:"calico-apiserver-6c4d7bff6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9dd6108c-e0cd-41e5-bd6d-20be6e54c890", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4d7bff6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3", Pod:"calico-apiserver-6c4d7bff6f-h9gmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9a2c927720", MAC:"da:f6:a2:f0:a0:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.358131 containerd[1495]: 2025-02-13 19:34:17.354 [INFO][5224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-h9gmf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--h9gmf-eth0" Feb 13 19:34:17.379885 containerd[1495]: time="2025-02-13T19:34:17.379756678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:17.379885 containerd[1495]: time="2025-02-13T19:34:17.379824164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:17.379885 containerd[1495]: time="2025-02-13T19:34:17.379839724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.380241 containerd[1495]: time="2025-02-13T19:34:17.379929151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.413340 systemd[1]: Started cri-containerd-71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3.scope - libcontainer container 71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3. 
Feb 13 19:34:17.425405 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:17.457034 containerd[1495]: time="2025-02-13T19:34:17.456974273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-h9gmf,Uid:9dd6108c-e0cd-41e5-bd6d-20be6e54c890,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3\"" Feb 13 19:34:17.459679 systemd-networkd[1417]: cali76b6caa549b: Link UP Feb 13 19:34:17.460155 systemd-networkd[1417]: cali76b6caa549b: Gained carrier Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.136 [INFO][5237] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.157 [INFO][5237] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--n72h7-eth0 coredns-668d6bf9bc- kube-system 7d4597d2-6027-4ade-9599-11a9fb3937e8 948 0 2025-02-13 19:33:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-n72h7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali76b6caa549b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.158 [INFO][5237] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.231 [INFO][5334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" HandleID="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Workload="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" HandleID="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Workload="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002951f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-n72h7", "timestamp":"2025-02-13 19:34:17.231184701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.341 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.341 [INFO][5334] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.424 [INFO][5334] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.429 [INFO][5334] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.435 [INFO][5334] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.437 [INFO][5334] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.439 [INFO][5334] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.439 [INFO][5334] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.441 [INFO][5334] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2 Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.446 [INFO][5334] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.453 [INFO][5334] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.453 [INFO][5334] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" host="localhost" Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.453 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:17.472749 containerd[1495]: 2025-02-13 19:34:17.453 [INFO][5334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" HandleID="k8s-pod-network.b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Workload="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:17.473457 containerd[1495]: 2025-02-13 19:34:17.456 [INFO][5237] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--n72h7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d4597d2-6027-4ade-9599-11a9fb3937e8", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-n72h7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76b6caa549b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.473457 containerd[1495]: 2025-02-13 19:34:17.456 [INFO][5237] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:17.473457 containerd[1495]: 2025-02-13 19:34:17.456 [INFO][5237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76b6caa549b ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:17.473457 containerd[1495]: 2025-02-13 19:34:17.460 [INFO][5237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 
19:34:17.473457 containerd[1495]: 2025-02-13 19:34:17.460 [INFO][5237] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--n72h7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7d4597d2-6027-4ade-9599-11a9fb3937e8", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2", Pod:"coredns-668d6bf9bc-n72h7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali76b6caa549b", MAC:"4a:37:ad:f6:9a:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.473457 containerd[1495]: 2025-02-13 19:34:17.469 [INFO][5237] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2" Namespace="kube-system" Pod="coredns-668d6bf9bc-n72h7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n72h7-eth0" Feb 13 19:34:17.504532 containerd[1495]: time="2025-02-13T19:34:17.504409151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:17.504662 containerd[1495]: time="2025-02-13T19:34:17.504486827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:17.504662 containerd[1495]: time="2025-02-13T19:34:17.504501585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.504662 containerd[1495]: time="2025-02-13T19:34:17.504600861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.528626 systemd[1]: Started cri-containerd-b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2.scope - libcontainer container b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2. 
Feb 13 19:34:17.543079 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:17.564051 systemd-networkd[1417]: cali04e7755c49e: Link UP Feb 13 19:34:17.564325 systemd-networkd[1417]: cali04e7755c49e: Gained carrier Feb 13 19:34:17.571659 containerd[1495]: time="2025-02-13T19:34:17.571618332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n72h7,Uid:7d4597d2-6027-4ade-9599-11a9fb3937e8,Namespace:kube-system,Attempt:6,} returns sandbox id \"b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2\"" Feb 13 19:34:17.572850 kubelet[2640]: E0213 19:34:17.572815 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:17.574955 containerd[1495]: time="2025-02-13T19:34:17.574906724Z" level=info msg="CreateContainer within sandbox \"b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.134 [INFO][5256] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.166 [INFO][5256] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--822rj-eth0 coredns-668d6bf9bc- kube-system b4f5f379-1bf2-49a8-b809-0761222a6c07 949 0 2025-02-13 19:33:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-822rj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04e7755c49e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.166 [INFO][5256] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.219 [INFO][5340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" HandleID="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Workload="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" HandleID="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Workload="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-822rj", "timestamp":"2025-02-13 19:34:17.219679998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.453 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.453 [INFO][5340] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.524 [INFO][5340] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.529 [INFO][5340] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.536 [INFO][5340] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.538 [INFO][5340] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.542 [INFO][5340] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.542 [INFO][5340] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.544 [INFO][5340] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.549 [INFO][5340] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.553 [INFO][5340] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.553 [INFO][5340] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" host="localhost" Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.553 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:17.581381 containerd[1495]: 2025-02-13 19:34:17.553 [INFO][5340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" HandleID="k8s-pod-network.5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Workload="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:17.582049 containerd[1495]: 2025-02-13 19:34:17.557 [INFO][5256] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--822rj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b4f5f379-1bf2-49a8-b809-0761222a6c07", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-822rj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e7755c49e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.582049 containerd[1495]: 2025-02-13 19:34:17.557 [INFO][5256] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:17.582049 containerd[1495]: 2025-02-13 19:34:17.557 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04e7755c49e ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:17.582049 containerd[1495]: 2025-02-13 19:34:17.565 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 
19:34:17.582049 containerd[1495]: 2025-02-13 19:34:17.566 [INFO][5256] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--822rj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b4f5f379-1bf2-49a8-b809-0761222a6c07", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d", Pod:"coredns-668d6bf9bc-822rj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04e7755c49e", MAC:"22:07:14:ff:89:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.582049 containerd[1495]: 2025-02-13 19:34:17.576 [INFO][5256] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d" Namespace="kube-system" Pod="coredns-668d6bf9bc-822rj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--822rj-eth0" Feb 13 19:34:17.598242 containerd[1495]: time="2025-02-13T19:34:17.598082953Z" level=info msg="CreateContainer within sandbox \"b0da10c068f70877cf4c5d34b80ea382c6a3d71c0147222893dc5ef1b50fefa2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c3236806e75abfa94ba37cc326c0ffbb273b177ea9e743cd45222f94b3906fa\"" Feb 13 19:34:17.599564 containerd[1495]: time="2025-02-13T19:34:17.598796342Z" level=info msg="StartContainer for \"9c3236806e75abfa94ba37cc326c0ffbb273b177ea9e743cd45222f94b3906fa\"" Feb 13 19:34:17.605706 containerd[1495]: time="2025-02-13T19:34:17.605613807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:17.605953 containerd[1495]: time="2025-02-13T19:34:17.605761133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:17.605953 containerd[1495]: time="2025-02-13T19:34:17.605791420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.605953 containerd[1495]: time="2025-02-13T19:34:17.605893602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.636386 systemd[1]: Started cri-containerd-5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d.scope - libcontainer container 5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d. Feb 13 19:34:17.640695 systemd[1]: Started cri-containerd-9c3236806e75abfa94ba37cc326c0ffbb273b177ea9e743cd45222f94b3906fa.scope - libcontainer container 9c3236806e75abfa94ba37cc326c0ffbb273b177ea9e743cd45222f94b3906fa. Feb 13 19:34:17.656368 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:17.694410 systemd-networkd[1417]: cali92400a76b6b: Link UP Feb 13 19:34:17.696415 systemd-networkd[1417]: cali92400a76b6b: Gained carrier Feb 13 19:34:17.719726 containerd[1495]: time="2025-02-13T19:34:17.719657930Z" level=info msg="StartContainer for \"9c3236806e75abfa94ba37cc326c0ffbb273b177ea9e743cd45222f94b3906fa\" returns successfully" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.105 [INFO][5234] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.144 [INFO][5234] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mrdz6-eth0 csi-node-driver- calico-system 1b3660a1-47a7-4062-b8e4-0e63486cf899 654 0 2025-02-13 19:33:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mrdz6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92400a76b6b [] []}} ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" 
Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.145 [INFO][5234] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.227 [INFO][5321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" HandleID="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Workload="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" HandleID="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Workload="localhost-k8s-csi--node--driver--mrdz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004840e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mrdz6", "timestamp":"2025-02-13 19:34:17.227551302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.553 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.553 [INFO][5321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.625 [INFO][5321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.631 [INFO][5321] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.636 [INFO][5321] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.641 [INFO][5321] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.648 [INFO][5321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.648 [INFO][5321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.650 [INFO][5321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.657 [INFO][5321] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.667 [INFO][5321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.667 [INFO][5321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" host="localhost" Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.667 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:17.721518 containerd[1495]: 2025-02-13 19:34:17.667 [INFO][5321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" HandleID="k8s-pod-network.a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Workload="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.723611 containerd[1495]: 2025-02-13 19:34:17.671 [INFO][5234] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mrdz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b3660a1-47a7-4062-b8e4-0e63486cf899", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mrdz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92400a76b6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.723611 containerd[1495]: 2025-02-13 19:34:17.671 [INFO][5234] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.723611 containerd[1495]: 2025-02-13 19:34:17.671 [INFO][5234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92400a76b6b ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.723611 containerd[1495]: 2025-02-13 19:34:17.689 [INFO][5234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.723611 containerd[1495]: 2025-02-13 19:34:17.690 [INFO][5234] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" 
Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mrdz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b3660a1-47a7-4062-b8e4-0e63486cf899", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a", Pod:"csi-node-driver-mrdz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92400a76b6b", MAC:"3a:9f:0f:35:d1:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.723611 containerd[1495]: 2025-02-13 19:34:17.713 [INFO][5234] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a" Namespace="calico-system" Pod="csi-node-driver-mrdz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mrdz6-eth0" Feb 13 19:34:17.730223 containerd[1495]: 
time="2025-02-13T19:34:17.729708884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-822rj,Uid:b4f5f379-1bf2-49a8-b809-0761222a6c07,Namespace:kube-system,Attempt:6,} returns sandbox id \"5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d\"" Feb 13 19:34:17.731849 kubelet[2640]: E0213 19:34:17.731814 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:17.739820 containerd[1495]: time="2025-02-13T19:34:17.739767332Z" level=info msg="CreateContainer within sandbox \"5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:17.775266 containerd[1495]: time="2025-02-13T19:34:17.774462530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:17.775266 containerd[1495]: time="2025-02-13T19:34:17.774537511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:17.775266 containerd[1495]: time="2025-02-13T19:34:17.774551286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.775266 containerd[1495]: time="2025-02-13T19:34:17.774653138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.797464 containerd[1495]: time="2025-02-13T19:34:17.797143238Z" level=info msg="CreateContainer within sandbox \"5551add976c2d6c1d65d49463712ab9c26361d7179a9db88a25bda918ee3991d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5128b8c70d47d1cc4ca10c7aa4d193c5a2d8c3a91444ee0ee15ca363abcb27d8\"" Feb 13 19:34:17.799399 containerd[1495]: time="2025-02-13T19:34:17.799348978Z" level=info msg="StartContainer for \"5128b8c70d47d1cc4ca10c7aa4d193c5a2d8c3a91444ee0ee15ca363abcb27d8\"" Feb 13 19:34:17.819175 systemd-networkd[1417]: cali171a0c9bb37: Link UP Feb 13 19:34:17.819626 systemd-networkd[1417]: cali171a0c9bb37: Gained carrier Feb 13 19:34:17.852839 systemd[1]: Started cri-containerd-a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a.scope - libcontainer container a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a. Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.132 [INFO][5272] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.152 [INFO][5272] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0 calico-apiserver-6c4d7bff6f- calico-apiserver 8000c7db-76d3-42a2-88ef-9e561c300a00 946 0 2025-02-13 19:33:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c4d7bff6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c4d7bff6f-kmrnk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali171a0c9bb37 [] []}} ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" 
Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.152 [INFO][5272] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.214 [INFO][5326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" HandleID="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" HandleID="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376a40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c4d7bff6f-kmrnk", "timestamp":"2025-02-13 19:34:17.214971411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.322 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.667 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.667 [INFO][5326] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.726 [INFO][5326] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.734 [INFO][5326] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.758 [INFO][5326] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.773 [INFO][5326] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.781 [INFO][5326] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.782 [INFO][5326] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.784 [INFO][5326] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03 Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.792 [INFO][5326] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.799 [INFO][5326] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.799 [INFO][5326] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" host="localhost" Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.799 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:17.875621 containerd[1495]: 2025-02-13 19:34:17.799 [INFO][5326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" HandleID="k8s-pod-network.8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Workload="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.876320 containerd[1495]: 2025-02-13 19:34:17.807 [INFO][5272] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0", GenerateName:"calico-apiserver-6c4d7bff6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8000c7db-76d3-42a2-88ef-9e561c300a00", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4d7bff6f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c4d7bff6f-kmrnk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali171a0c9bb37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.876320 containerd[1495]: 2025-02-13 19:34:17.807 [INFO][5272] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.876320 containerd[1495]: 2025-02-13 19:34:17.807 [INFO][5272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali171a0c9bb37 ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.876320 containerd[1495]: 2025-02-13 19:34:17.831 [INFO][5272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.876320 containerd[1495]: 2025-02-13 19:34:17.839 [INFO][5272] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0", GenerateName:"calico-apiserver-6c4d7bff6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8000c7db-76d3-42a2-88ef-9e561c300a00", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4d7bff6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03", Pod:"calico-apiserver-6c4d7bff6f-kmrnk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali171a0c9bb37", MAC:"c2:57:92:45:4b:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:17.876320 containerd[1495]: 2025-02-13 19:34:17.859 [INFO][5272] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d7bff6f-kmrnk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4d7bff6f--kmrnk-eth0" Feb 13 19:34:17.891366 systemd[1]: Started cri-containerd-5128b8c70d47d1cc4ca10c7aa4d193c5a2d8c3a91444ee0ee15ca363abcb27d8.scope - libcontainer container 5128b8c70d47d1cc4ca10c7aa4d193c5a2d8c3a91444ee0ee15ca363abcb27d8. Feb 13 19:34:17.927165 containerd[1495]: time="2025-02-13T19:34:17.925925623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:17.927165 containerd[1495]: time="2025-02-13T19:34:17.926004822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:17.927165 containerd[1495]: time="2025-02-13T19:34:17.926019860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.927165 containerd[1495]: time="2025-02-13T19:34:17.926099048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:17.962726 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:17.974392 systemd[1]: Started cri-containerd-8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03.scope - libcontainer container 8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03. 
Feb 13 19:34:17.974625 containerd[1495]: time="2025-02-13T19:34:17.974562847Z" level=info msg="StartContainer for \"5128b8c70d47d1cc4ca10c7aa4d193c5a2d8c3a91444ee0ee15ca363abcb27d8\" returns successfully" Feb 13 19:34:18.017612 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:18.026795 containerd[1495]: time="2025-02-13T19:34:18.026670664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mrdz6,Uid:1b3660a1-47a7-4062-b8e4-0e63486cf899,Namespace:calico-system,Attempt:7,} returns sandbox id \"a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a\"" Feb 13 19:34:18.042687 kubelet[2640]: E0213 19:34:18.041867 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:18.053392 systemd-networkd[1417]: caliad8d12a020a: Gained IPv6LL Feb 13 19:34:18.054376 kubelet[2640]: E0213 19:34:18.053622 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:18.056142 kubelet[2640]: E0213 19:34:18.054959 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:18.071283 kernel: bpftool[5825]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:34:18.074756 containerd[1495]: time="2025-02-13T19:34:18.074709551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d7bff6f-kmrnk,Uid:8000c7db-76d3-42a2-88ef-9e561c300a00,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03\"" Feb 13 19:34:18.111305 kubelet[2640]: I0213 19:34:18.111124 2640 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n72h7" podStartSLOduration=33.111102601 podStartE2EDuration="33.111102601s" podCreationTimestamp="2025-02-13 19:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:18.110306568 +0000 UTC m=+38.243623353" watchObservedRunningTime="2025-02-13 19:34:18.111102601 +0000 UTC m=+38.244419386" Feb 13 19:34:18.351235 systemd-networkd[1417]: vxlan.calico: Link UP Feb 13 19:34:18.351248 systemd-networkd[1417]: vxlan.calico: Gained carrier Feb 13 19:34:18.757426 systemd-networkd[1417]: cali04e7755c49e: Gained IPv6LL Feb 13 19:34:19.065327 kubelet[2640]: E0213 19:34:19.065298 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:19.065800 kubelet[2640]: E0213 19:34:19.065464 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:19.077346 systemd-networkd[1417]: cali76b6caa549b: Gained IPv6LL Feb 13 19:34:19.141403 systemd-networkd[1417]: calic9a2c927720: Gained IPv6LL Feb 13 19:34:19.209650 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:41006.service - OpenSSH per-connection server daemon (10.0.0.1:41006). Feb 13 19:34:19.270343 systemd-networkd[1417]: cali171a0c9bb37: Gained IPv6LL Feb 13 19:34:19.388743 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 41006 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:19.390727 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:19.395179 systemd-logind[1480]: New session 12 of user core. 
Feb 13 19:34:19.404330 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:34:19.525401 systemd-networkd[1417]: vxlan.calico: Gained IPv6LL Feb 13 19:34:19.551994 sshd[5938]: Connection closed by 10.0.0.1 port 41006 Feb 13 19:34:19.552550 sshd-session[5936]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:19.557401 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:41006.service: Deactivated successfully. Feb 13 19:34:19.559849 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:34:19.560792 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:34:19.561826 systemd-logind[1480]: Removed session 12. Feb 13 19:34:19.590383 systemd-networkd[1417]: cali92400a76b6b: Gained IPv6LL Feb 13 19:34:19.777814 containerd[1495]: time="2025-02-13T19:34:19.777748144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:19.799789 containerd[1495]: time="2025-02-13T19:34:19.799713657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:34:19.802323 containerd[1495]: time="2025-02-13T19:34:19.802238444Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:19.839491 containerd[1495]: time="2025-02-13T19:34:19.839421956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:19.840310 containerd[1495]: time="2025-02-13T19:34:19.840280276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo 
tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.100974914s" Feb 13 19:34:19.840310 containerd[1495]: time="2025-02-13T19:34:19.840313188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:34:19.841610 containerd[1495]: time="2025-02-13T19:34:19.841554498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:34:19.848300 containerd[1495]: time="2025-02-13T19:34:19.848262747Z" level=info msg="CreateContainer within sandbox \"c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:34:19.889324 containerd[1495]: time="2025-02-13T19:34:19.889263010Z" level=info msg="CreateContainer within sandbox \"c5c183af6a156ca0d554c60adbf0a58fa5a89bc43bd65c5cc13aa13e2b33cd24\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ca75baa6b4121548ba891dd147929a1e597cead2f1863ce7a3c512806fa4855c\"" Feb 13 19:34:19.889901 containerd[1495]: time="2025-02-13T19:34:19.889866142Z" level=info msg="StartContainer for \"ca75baa6b4121548ba891dd147929a1e597cead2f1863ce7a3c512806fa4855c\"" Feb 13 19:34:19.924353 systemd[1]: Started cri-containerd-ca75baa6b4121548ba891dd147929a1e597cead2f1863ce7a3c512806fa4855c.scope - libcontainer container ca75baa6b4121548ba891dd147929a1e597cead2f1863ce7a3c512806fa4855c. 
Feb 13 19:34:19.998121 containerd[1495]: time="2025-02-13T19:34:19.998030511Z" level=info msg="StartContainer for \"ca75baa6b4121548ba891dd147929a1e597cead2f1863ce7a3c512806fa4855c\" returns successfully" Feb 13 19:34:20.071712 kubelet[2640]: E0213 19:34:20.071285 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:20.075672 kubelet[2640]: E0213 19:34:20.075623 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:20.086073 kubelet[2640]: I0213 19:34:20.085966 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-822rj" podStartSLOduration=35.085904434 podStartE2EDuration="35.085904434s" podCreationTimestamp="2025-02-13 19:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:18.255675158 +0000 UTC m=+38.388991943" watchObservedRunningTime="2025-02-13 19:34:20.085904434 +0000 UTC m=+40.219221219" Feb 13 19:34:20.086307 kubelet[2640]: I0213 19:34:20.086279 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7758bf7464-bz5p8" podStartSLOduration=21.983745199 podStartE2EDuration="25.086272295s" podCreationTimestamp="2025-02-13 19:33:55 +0000 UTC" firstStartedPulling="2025-02-13 19:34:16.738821105 +0000 UTC m=+36.872137890" lastFinishedPulling="2025-02-13 19:34:19.841348201 +0000 UTC m=+39.974664986" observedRunningTime="2025-02-13 19:34:20.085668932 +0000 UTC m=+40.218985727" watchObservedRunningTime="2025-02-13 19:34:20.086272295 +0000 UTC m=+40.219589080" Feb 13 19:34:22.833564 containerd[1495]: time="2025-02-13T19:34:22.833478216Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:22.881420 containerd[1495]: time="2025-02-13T19:34:22.881334976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:34:22.928337 containerd[1495]: time="2025-02-13T19:34:22.928281759Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:23.023048 containerd[1495]: time="2025-02-13T19:34:23.022976676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:23.024051 containerd[1495]: time="2025-02-13T19:34:23.024019423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.182433986s" Feb 13 19:34:23.024051 containerd[1495]: time="2025-02-13T19:34:23.024049970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:34:23.025653 containerd[1495]: time="2025-02-13T19:34:23.025622099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:34:23.026747 containerd[1495]: time="2025-02-13T19:34:23.026720340Z" level=info msg="CreateContainer within sandbox \"71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:34:23.085367 
containerd[1495]: time="2025-02-13T19:34:23.085233834Z" level=info msg="CreateContainer within sandbox \"71039bef97d533c7bccd312397a14d8bbbe0ef84b2661472813713e939ac2cb3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e3c99f97bf7a4a5413bcaf2de34a23646069ba436ce809f50eb6a8aac133a94\"" Feb 13 19:34:23.086172 containerd[1495]: time="2025-02-13T19:34:23.086147007Z" level=info msg="StartContainer for \"0e3c99f97bf7a4a5413bcaf2de34a23646069ba436ce809f50eb6a8aac133a94\"" Feb 13 19:34:23.123439 systemd[1]: Started cri-containerd-0e3c99f97bf7a4a5413bcaf2de34a23646069ba436ce809f50eb6a8aac133a94.scope - libcontainer container 0e3c99f97bf7a4a5413bcaf2de34a23646069ba436ce809f50eb6a8aac133a94. Feb 13 19:34:23.459943 containerd[1495]: time="2025-02-13T19:34:23.459784538Z" level=info msg="StartContainer for \"0e3c99f97bf7a4a5413bcaf2de34a23646069ba436ce809f50eb6a8aac133a94\" returns successfully" Feb 13 19:34:24.174527 kubelet[2640]: I0213 19:34:24.174327 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-h9gmf" podStartSLOduration=25.608126979 podStartE2EDuration="31.174302356s" podCreationTimestamp="2025-02-13 19:33:53 +0000 UTC" firstStartedPulling="2025-02-13 19:34:17.458838542 +0000 UTC m=+37.592155327" lastFinishedPulling="2025-02-13 19:34:23.025013919 +0000 UTC m=+43.158330704" observedRunningTime="2025-02-13 19:34:24.174158926 +0000 UTC m=+44.307475711" watchObservedRunningTime="2025-02-13 19:34:24.174302356 +0000 UTC m=+44.307619141" Feb 13 19:34:24.564728 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:39050.service - OpenSSH per-connection server daemon (10.0.0.1:39050). 
Feb 13 19:34:24.622995 sshd[6075]: Accepted publickey for core from 10.0.0.1 port 39050 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:24.625886 sshd-session[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:24.630712 systemd-logind[1480]: New session 13 of user core. Feb 13 19:34:24.637434 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:34:24.776823 sshd[6077]: Connection closed by 10.0.0.1 port 39050 Feb 13 19:34:24.777270 sshd-session[6075]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:24.788374 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:39050.service: Deactivated successfully. Feb 13 19:34:24.790377 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:34:24.791856 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:34:24.801528 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:39056.service - OpenSSH per-connection server daemon (10.0.0.1:39056). Feb 13 19:34:24.802442 systemd-logind[1480]: Removed session 13. Feb 13 19:34:24.844495 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 39056 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:24.846066 sshd-session[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:24.850326 systemd-logind[1480]: New session 14 of user core. Feb 13 19:34:24.862425 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:34:25.247000 sshd[6093]: Connection closed by 10.0.0.1 port 39056 Feb 13 19:34:25.247463 sshd-session[6091]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:25.256279 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:39056.service: Deactivated successfully. Feb 13 19:34:25.258441 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:34:25.260126 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. 
Feb 13 19:34:25.271263 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:39060.service - OpenSSH per-connection server daemon (10.0.0.1:39060). Feb 13 19:34:25.273083 systemd-logind[1480]: Removed session 14. Feb 13 19:34:25.315341 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 39060 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:25.316939 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:25.321243 systemd-logind[1480]: New session 15 of user core. Feb 13 19:34:25.332317 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:34:25.635619 sshd[6106]: Connection closed by 10.0.0.1 port 39060 Feb 13 19:34:25.636005 sshd-session[6104]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:25.640869 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:39060.service: Deactivated successfully. Feb 13 19:34:25.643979 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:34:25.645234 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:34:25.646269 systemd-logind[1480]: Removed session 15. 
Feb 13 19:34:25.687164 containerd[1495]: time="2025-02-13T19:34:25.687096668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:25.688756 containerd[1495]: time="2025-02-13T19:34:25.688687282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:34:25.690597 containerd[1495]: time="2025-02-13T19:34:25.690499132Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:25.693929 containerd[1495]: time="2025-02-13T19:34:25.693878573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:25.694607 containerd[1495]: time="2025-02-13T19:34:25.694566202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.668903297s" Feb 13 19:34:25.694647 containerd[1495]: time="2025-02-13T19:34:25.694605947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:34:25.697782 containerd[1495]: time="2025-02-13T19:34:25.697734677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:34:25.700626 containerd[1495]: time="2025-02-13T19:34:25.700572753Z" level=info msg="CreateContainer within sandbox \"a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:34:25.780506 containerd[1495]: time="2025-02-13T19:34:25.780444722Z" level=info msg="CreateContainer within sandbox \"a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"298f6500a0aa08e268a30dad7239d906e1373be900a9c70f652c075f3fcce160\"" Feb 13 19:34:25.781763 containerd[1495]: time="2025-02-13T19:34:25.781724243Z" level=info msg="StartContainer for \"298f6500a0aa08e268a30dad7239d906e1373be900a9c70f652c075f3fcce160\"" Feb 13 19:34:25.822368 systemd[1]: Started cri-containerd-298f6500a0aa08e268a30dad7239d906e1373be900a9c70f652c075f3fcce160.scope - libcontainer container 298f6500a0aa08e268a30dad7239d906e1373be900a9c70f652c075f3fcce160. Feb 13 19:34:25.926084 containerd[1495]: time="2025-02-13T19:34:25.925860117Z" level=info msg="StartContainer for \"298f6500a0aa08e268a30dad7239d906e1373be900a9c70f652c075f3fcce160\" returns successfully" Feb 13 19:34:26.344381 containerd[1495]: time="2025-02-13T19:34:26.344325061Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.345275 containerd[1495]: time="2025-02-13T19:34:26.345221793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:34:26.347408 containerd[1495]: time="2025-02-13T19:34:26.347368791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 649.597716ms" Feb 13 19:34:26.347408 containerd[1495]: time="2025-02-13T19:34:26.347401072Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:34:26.348254 containerd[1495]: time="2025-02-13T19:34:26.348232512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:34:26.349172 containerd[1495]: time="2025-02-13T19:34:26.349146537Z" level=info msg="CreateContainer within sandbox \"8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:34:26.365313 containerd[1495]: time="2025-02-13T19:34:26.365267393Z" level=info msg="CreateContainer within sandbox \"8a7f05648375f3d02179dfc887aa0bf021d0daf0347001b9b00f308cbd33cf03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"53749eaf8d090ca9c23332bea736850f7cd6a374fe903984df37fb05681b7727\"" Feb 13 19:34:26.365868 containerd[1495]: time="2025-02-13T19:34:26.365777520Z" level=info msg="StartContainer for \"53749eaf8d090ca9c23332bea736850f7cd6a374fe903984df37fb05681b7727\"" Feb 13 19:34:26.394352 systemd[1]: Started cri-containerd-53749eaf8d090ca9c23332bea736850f7cd6a374fe903984df37fb05681b7727.scope - libcontainer container 53749eaf8d090ca9c23332bea736850f7cd6a374fe903984df37fb05681b7727. 
Feb 13 19:34:26.498072 containerd[1495]: time="2025-02-13T19:34:26.498010634Z" level=info msg="StartContainer for \"53749eaf8d090ca9c23332bea736850f7cd6a374fe903984df37fb05681b7727\" returns successfully" Feb 13 19:34:27.374372 kubelet[2640]: I0213 19:34:27.372775 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c4d7bff6f-kmrnk" podStartSLOduration=26.101478465 podStartE2EDuration="34.372749941s" podCreationTimestamp="2025-02-13 19:33:53 +0000 UTC" firstStartedPulling="2025-02-13 19:34:18.076776921 +0000 UTC m=+38.210093706" lastFinishedPulling="2025-02-13 19:34:26.348048377 +0000 UTC m=+46.481365182" observedRunningTime="2025-02-13 19:34:27.370730974 +0000 UTC m=+47.504047759" watchObservedRunningTime="2025-02-13 19:34:27.372749941 +0000 UTC m=+47.506066726" Feb 13 19:34:28.103507 kubelet[2640]: I0213 19:34:28.103465 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:30.103352 containerd[1495]: time="2025-02-13T19:34:30.103289265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:30.104109 containerd[1495]: time="2025-02-13T19:34:30.104047411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:34:30.105771 containerd[1495]: time="2025-02-13T19:34:30.105658253Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:30.108077 containerd[1495]: time="2025-02-13T19:34:30.108042720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:30.109008 
containerd[1495]: time="2025-02-13T19:34:30.108979409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.760646649s" Feb 13 19:34:30.109182 containerd[1495]: time="2025-02-13T19:34:30.109014255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:34:30.111515 containerd[1495]: time="2025-02-13T19:34:30.111465832Z" level=info msg="CreateContainer within sandbox \"a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:34:30.130812 containerd[1495]: time="2025-02-13T19:34:30.130760252Z" level=info msg="CreateContainer within sandbox \"a601194b064f8e32611c8b231e6762614cb3c197abff076f02e81641c7a0938a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1ad354988c0d5a56798eba33c071ed514ab629e91ac9d1818cdd1466acd81055\"" Feb 13 19:34:30.131593 containerd[1495]: time="2025-02-13T19:34:30.131540861Z" level=info msg="StartContainer for \"1ad354988c0d5a56798eba33c071ed514ab629e91ac9d1818cdd1466acd81055\"" Feb 13 19:34:30.179471 systemd[1]: Started cri-containerd-1ad354988c0d5a56798eba33c071ed514ab629e91ac9d1818cdd1466acd81055.scope - libcontainer container 1ad354988c0d5a56798eba33c071ed514ab629e91ac9d1818cdd1466acd81055. 
Feb 13 19:34:30.214707 containerd[1495]: time="2025-02-13T19:34:30.214609880Z" level=info msg="StartContainer for \"1ad354988c0d5a56798eba33c071ed514ab629e91ac9d1818cdd1466acd81055\" returns successfully" Feb 13 19:34:30.649953 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:39068.service - OpenSSH per-connection server daemon (10.0.0.1:39068). Feb 13 19:34:30.707184 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 39068 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:30.709536 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:30.714361 systemd-logind[1480]: New session 16 of user core. Feb 13 19:34:30.720382 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:34:30.850949 sshd[6251]: Connection closed by 10.0.0.1 port 39068 Feb 13 19:34:30.851324 sshd-session[6249]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:30.854803 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:39068.service: Deactivated successfully. Feb 13 19:34:30.856743 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:34:30.857363 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:34:30.858450 systemd-logind[1480]: Removed session 16. Feb 13 19:34:31.047681 kubelet[2640]: I0213 19:34:31.047647 2640 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:34:31.047681 kubelet[2640]: I0213 19:34:31.047686 2640 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:34:35.871254 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:36342.service - OpenSSH per-connection server daemon (10.0.0.1:36342). 
Feb 13 19:34:35.916052 sshd[6267]: Accepted publickey for core from 10.0.0.1 port 36342 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:35.917729 sshd-session[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:35.922479 systemd-logind[1480]: New session 17 of user core. Feb 13 19:34:35.934446 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:34:36.066585 sshd[6269]: Connection closed by 10.0.0.1 port 36342 Feb 13 19:34:36.067132 sshd-session[6267]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:36.073583 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:36342.service: Deactivated successfully. Feb 13 19:34:36.076098 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:34:36.076784 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:34:36.077782 systemd-logind[1480]: Removed session 17. Feb 13 19:34:39.970904 containerd[1495]: time="2025-02-13T19:34:39.970849431Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:39.971444 containerd[1495]: time="2025-02-13T19:34:39.970992995Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:39.971444 containerd[1495]: time="2025-02-13T19:34:39.971007163Z" level=info msg="StopPodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:39.971444 containerd[1495]: time="2025-02-13T19:34:39.971399583Z" level=info msg="RemovePodSandbox for \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:39.981823 containerd[1495]: time="2025-02-13T19:34:39.981778827Z" level=info msg="Forcibly stopping sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\"" Feb 13 19:34:39.981964 containerd[1495]: 
time="2025-02-13T19:34:39.981907282Z" level=info msg="TearDown network for sandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" successfully" Feb 13 19:34:40.084506 containerd[1495]: time="2025-02-13T19:34:40.084422900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:34:40.084667 containerd[1495]: time="2025-02-13T19:34:40.084570050Z" level=info msg="RemovePodSandbox \"274620e275fe5d95a0e86f3fe8c74a7b311c5f19e94dac07faad5c451cd706dd\" returns successfully" Feb 13 19:34:40.085354 containerd[1495]: time="2025-02-13T19:34:40.085288191Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:40.085529 containerd[1495]: time="2025-02-13T19:34:40.085402099Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:40.085529 containerd[1495]: time="2025-02-13T19:34:40.085413420Z" level=info msg="StopPodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:40.085948 containerd[1495]: time="2025-02-13T19:34:40.085905840Z" level=info msg="RemovePodSandbox for \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:40.086052 containerd[1495]: time="2025-02-13T19:34:40.085955455Z" level=info msg="Forcibly stopping sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\"" Feb 13 19:34:40.086158 containerd[1495]: time="2025-02-13T19:34:40.086100251Z" level=info msg="TearDown network for sandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" successfully" Feb 13 19:34:40.107026 containerd[1495]: time="2025-02-13T19:34:40.106940477Z" level=warning msg="Failed to get 
podSandbox status for container event for sandboxID \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:34:40.107172 containerd[1495]: time="2025-02-13T19:34:40.107058051Z" level=info msg="RemovePodSandbox \"1bd2dd3c3c280c39633f7eaaf56087852cfa62ce459983514c674a0ef0e61b01\" returns successfully" Feb 13 19:34:40.107657 containerd[1495]: time="2025-02-13T19:34:40.107622939Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:40.107790 containerd[1495]: time="2025-02-13T19:34:40.107755072Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:40.107790 containerd[1495]: time="2025-02-13T19:34:40.107773517Z" level=info msg="StopPodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:40.108108 containerd[1495]: time="2025-02-13T19:34:40.108078660Z" level=info msg="RemovePodSandbox for \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:40.108162 containerd[1495]: time="2025-02-13T19:34:40.108105121Z" level=info msg="Forcibly stopping sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\"" Feb 13 19:34:40.108279 containerd[1495]: time="2025-02-13T19:34:40.108226041Z" level=info msg="TearDown network for sandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" successfully" Feb 13 19:34:40.150478 containerd[1495]: time="2025-02-13T19:34:40.150398501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.150478 containerd[1495]: time="2025-02-13T19:34:40.150508161Z" level=info msg="RemovePodSandbox \"b8c23871a023cf6a6ee64c233ac151f2195fdcf9435bf5f89c1f3bcba5bd6a00\" returns successfully" Feb 13 19:34:40.151293 containerd[1495]: time="2025-02-13T19:34:40.151242623Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:40.151514 containerd[1495]: time="2025-02-13T19:34:40.151426123Z" level=info msg="TearDown network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" successfully" Feb 13 19:34:40.151514 containerd[1495]: time="2025-02-13T19:34:40.151498823Z" level=info msg="StopPodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" returns successfully" Feb 13 19:34:40.152143 containerd[1495]: time="2025-02-13T19:34:40.152092887Z" level=info msg="RemovePodSandbox for \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:40.152143 containerd[1495]: time="2025-02-13T19:34:40.152150126Z" level=info msg="Forcibly stopping sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\"" Feb 13 19:34:40.152393 containerd[1495]: time="2025-02-13T19:34:40.152298840Z" level=info msg="TearDown network for sandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" successfully" Feb 13 19:34:40.238108 containerd[1495]: time="2025-02-13T19:34:40.237917141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.238108 containerd[1495]: time="2025-02-13T19:34:40.238007283Z" level=info msg="RemovePodSandbox \"70126dd4c5c11906f8b0ebe4d56cd55a4eb80e4863ae8850940ea20fddc74863\" returns successfully" Feb 13 19:34:40.239986 containerd[1495]: time="2025-02-13T19:34:40.238811659Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" Feb 13 19:34:40.239986 containerd[1495]: time="2025-02-13T19:34:40.238994949Z" level=info msg="TearDown network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" successfully" Feb 13 19:34:40.239986 containerd[1495]: time="2025-02-13T19:34:40.239059071Z" level=info msg="StopPodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" returns successfully" Feb 13 19:34:40.239986 containerd[1495]: time="2025-02-13T19:34:40.239451541Z" level=info msg="RemovePodSandbox for \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" Feb 13 19:34:40.239986 containerd[1495]: time="2025-02-13T19:34:40.239485335Z" level=info msg="Forcibly stopping sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\"" Feb 13 19:34:40.239986 containerd[1495]: time="2025-02-13T19:34:40.239576479Z" level=info msg="TearDown network for sandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" successfully" Feb 13 19:34:40.358963 containerd[1495]: time="2025-02-13T19:34:40.358902051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.359142 containerd[1495]: time="2025-02-13T19:34:40.359007702Z" level=info msg="RemovePodSandbox \"1de3d728a569ec5ae6b702b850030f6a2adca21fd646101ba3167a611da348ae\" returns successfully" Feb 13 19:34:40.359690 containerd[1495]: time="2025-02-13T19:34:40.359627134Z" level=info msg="StopPodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\"" Feb 13 19:34:40.359858 containerd[1495]: time="2025-02-13T19:34:40.359756181Z" level=info msg="TearDown network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" successfully" Feb 13 19:34:40.359858 containerd[1495]: time="2025-02-13T19:34:40.359768154Z" level=info msg="StopPodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" returns successfully" Feb 13 19:34:40.360837 containerd[1495]: time="2025-02-13T19:34:40.360804202Z" level=info msg="RemovePodSandbox for \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\"" Feb 13 19:34:40.360917 containerd[1495]: time="2025-02-13T19:34:40.360852514Z" level=info msg="Forcibly stopping sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\"" Feb 13 19:34:40.361047 containerd[1495]: time="2025-02-13T19:34:40.360981190Z" level=info msg="TearDown network for sandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" successfully" Feb 13 19:34:40.490094 containerd[1495]: time="2025-02-13T19:34:40.489835232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.490094 containerd[1495]: time="2025-02-13T19:34:40.489935883Z" level=info msg="RemovePodSandbox \"524670f1820763f6d157c7f537987987d4f5ed875814e316380dcf0134d70972\" returns successfully" Feb 13 19:34:40.490569 containerd[1495]: time="2025-02-13T19:34:40.490537702Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:40.490807 containerd[1495]: time="2025-02-13T19:34:40.490761289Z" level=info msg="TearDown network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:40.490807 containerd[1495]: time="2025-02-13T19:34:40.490786337Z" level=info msg="StopPodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:40.491209 containerd[1495]: time="2025-02-13T19:34:40.491155291Z" level=info msg="RemovePodSandbox for \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:40.491209 containerd[1495]: time="2025-02-13T19:34:40.491183435Z" level=info msg="Forcibly stopping sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\"" Feb 13 19:34:40.491335 containerd[1495]: time="2025-02-13T19:34:40.491280471Z" level=info msg="TearDown network for sandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" successfully" Feb 13 19:34:40.526744 containerd[1495]: time="2025-02-13T19:34:40.526672607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.526923 containerd[1495]: time="2025-02-13T19:34:40.526782777Z" level=info msg="RemovePodSandbox \"fbc8937aa2f62ec663e2b5fa3326be908e6bbef778c87db9860ed4c693053dde\" returns successfully" Feb 13 19:34:40.527438 containerd[1495]: time="2025-02-13T19:34:40.527400056Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:40.527564 containerd[1495]: time="2025-02-13T19:34:40.527532658Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:40.527564 containerd[1495]: time="2025-02-13T19:34:40.527552286Z" level=info msg="StopPodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:40.527847 containerd[1495]: time="2025-02-13T19:34:40.527808074Z" level=info msg="RemovePodSandbox for \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:40.527847 containerd[1495]: time="2025-02-13T19:34:40.527835597Z" level=info msg="Forcibly stopping sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\"" Feb 13 19:34:40.528079 containerd[1495]: time="2025-02-13T19:34:40.527924747Z" level=info msg="TearDown network for sandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" successfully" Feb 13 19:34:40.549705 containerd[1495]: time="2025-02-13T19:34:40.549646866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.549891 containerd[1495]: time="2025-02-13T19:34:40.549741076Z" level=info msg="RemovePodSandbox \"3cd0556d144ead2cd0fe9b0e2548236139edbcde435f94ff76a2d8f11426cbb4\" returns successfully" Feb 13 19:34:40.550449 containerd[1495]: time="2025-02-13T19:34:40.550406226Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:40.550588 containerd[1495]: time="2025-02-13T19:34:40.550563005Z" level=info msg="TearDown network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" successfully" Feb 13 19:34:40.550588 containerd[1495]: time="2025-02-13T19:34:40.550579227Z" level=info msg="StopPodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" returns successfully" Feb 13 19:34:40.550972 containerd[1495]: time="2025-02-13T19:34:40.550945195Z" level=info msg="RemovePodSandbox for \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:40.550972 containerd[1495]: time="2025-02-13T19:34:40.550978669Z" level=info msg="Forcibly stopping sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\"" Feb 13 19:34:40.551148 containerd[1495]: time="2025-02-13T19:34:40.551096674Z" level=info msg="TearDown network for sandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" successfully" Feb 13 19:34:40.556085 containerd[1495]: time="2025-02-13T19:34:40.556036264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.556260 containerd[1495]: time="2025-02-13T19:34:40.556114313Z" level=info msg="RemovePodSandbox \"cef5983e3770d34c85f3cd59dacbe0845a2eb8399021e993ef0075e8393ef657\" returns successfully" Feb 13 19:34:40.556735 containerd[1495]: time="2025-02-13T19:34:40.556695212Z" level=info msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" Feb 13 19:34:40.556861 containerd[1495]: time="2025-02-13T19:34:40.556829288Z" level=info msg="TearDown network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" successfully" Feb 13 19:34:40.556861 containerd[1495]: time="2025-02-13T19:34:40.556846440Z" level=info msg="StopPodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" returns successfully" Feb 13 19:34:40.557325 containerd[1495]: time="2025-02-13T19:34:40.557278936Z" level=info msg="RemovePodSandbox for \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" Feb 13 19:34:40.557325 containerd[1495]: time="2025-02-13T19:34:40.557301400Z" level=info msg="Forcibly stopping sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\"" Feb 13 19:34:40.557435 containerd[1495]: time="2025-02-13T19:34:40.557397292Z" level=info msg="TearDown network for sandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" successfully" Feb 13 19:34:40.568072 containerd[1495]: time="2025-02-13T19:34:40.567972241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.568258 containerd[1495]: time="2025-02-13T19:34:40.568103050Z" level=info msg="RemovePodSandbox \"8906fcccfefb471dd0086bb8e5bb18240ca67864592ebb7e55a6c8df30f78c4d\" returns successfully" Feb 13 19:34:40.568635 containerd[1495]: time="2025-02-13T19:34:40.568604708Z" level=info msg="StopPodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\"" Feb 13 19:34:40.568767 containerd[1495]: time="2025-02-13T19:34:40.568744996Z" level=info msg="TearDown network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" successfully" Feb 13 19:34:40.568767 containerd[1495]: time="2025-02-13T19:34:40.568763381Z" level=info msg="StopPodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" returns successfully" Feb 13 19:34:40.569156 containerd[1495]: time="2025-02-13T19:34:40.569122446Z" level=info msg="RemovePodSandbox for \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\"" Feb 13 19:34:40.569220 containerd[1495]: time="2025-02-13T19:34:40.569165037Z" level=info msg="Forcibly stopping sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\"" Feb 13 19:34:40.569342 containerd[1495]: time="2025-02-13T19:34:40.569290638Z" level=info msg="TearDown network for sandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" successfully" Feb 13 19:34:40.573501 containerd[1495]: time="2025-02-13T19:34:40.573472291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.573575 containerd[1495]: time="2025-02-13T19:34:40.573524260Z" level=info msg="RemovePodSandbox \"3de5ec605c5ea77aca35e70028c0c62cfe328747811dcd30271cfe035161dda9\" returns successfully" Feb 13 19:34:40.574003 containerd[1495]: time="2025-02-13T19:34:40.573958599Z" level=info msg="StopPodSandbox for \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\"" Feb 13 19:34:40.574083 containerd[1495]: time="2025-02-13T19:34:40.574066505Z" level=info msg="TearDown network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\" successfully" Feb 13 19:34:40.574115 containerd[1495]: time="2025-02-13T19:34:40.574082665Z" level=info msg="StopPodSandbox for \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\" returns successfully" Feb 13 19:34:40.574524 containerd[1495]: time="2025-02-13T19:34:40.574495113Z" level=info msg="RemovePodSandbox for \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\"" Feb 13 19:34:40.574568 containerd[1495]: time="2025-02-13T19:34:40.574532114Z" level=info msg="Forcibly stopping sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\"" Feb 13 19:34:40.574685 containerd[1495]: time="2025-02-13T19:34:40.574641011Z" level=info msg="TearDown network for sandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\" successfully" Feb 13 19:34:40.579689 containerd[1495]: time="2025-02-13T19:34:40.579637961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.579740 containerd[1495]: time="2025-02-13T19:34:40.579724236Z" level=info msg="RemovePodSandbox \"6399ce03ad169717020e59b8182c63f2535fe0cdf0f696b779a96268d330ad5f\" returns successfully" Feb 13 19:34:40.580200 containerd[1495]: time="2025-02-13T19:34:40.580143466Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:40.580344 containerd[1495]: time="2025-02-13T19:34:40.580297620Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:40.580344 containerd[1495]: time="2025-02-13T19:34:40.580314914Z" level=info msg="StopPodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:40.580574 containerd[1495]: time="2025-02-13T19:34:40.580547889Z" level=info msg="RemovePodSandbox for \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:40.580602 containerd[1495]: time="2025-02-13T19:34:40.580580240Z" level=info msg="Forcibly stopping sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\"" Feb 13 19:34:40.580732 containerd[1495]: time="2025-02-13T19:34:40.580677506Z" level=info msg="TearDown network for sandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" successfully" Feb 13 19:34:40.585526 containerd[1495]: time="2025-02-13T19:34:40.585489643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.585618 containerd[1495]: time="2025-02-13T19:34:40.585542263Z" level=info msg="RemovePodSandbox \"fea2121252ced18d0e8ea818d4946ff71dae1f00ff27e5e123795e82c2f1ddd3\" returns successfully" Feb 13 19:34:40.585887 containerd[1495]: time="2025-02-13T19:34:40.585845883Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:40.585978 containerd[1495]: time="2025-02-13T19:34:40.585957296Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:40.586004 containerd[1495]: time="2025-02-13T19:34:40.585977323Z" level=info msg="StopPodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns successfully" Feb 13 19:34:40.586322 containerd[1495]: time="2025-02-13T19:34:40.586295862Z" level=info msg="RemovePodSandbox for \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:40.586360 containerd[1495]: time="2025-02-13T19:34:40.586325428Z" level=info msg="Forcibly stopping sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\"" Feb 13 19:34:40.586465 containerd[1495]: time="2025-02-13T19:34:40.586405201Z" level=info msg="TearDown network for sandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" successfully" Feb 13 19:34:40.590545 containerd[1495]: time="2025-02-13T19:34:40.590511539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.591065 containerd[1495]: time="2025-02-13T19:34:40.590558449Z" level=info msg="RemovePodSandbox \"d211670d201112826af6ed25dea4671404f6e550f8d0b57c1f679ed8dfbe8ead\" returns successfully" Feb 13 19:34:40.591065 containerd[1495]: time="2025-02-13T19:34:40.590811773Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:40.591065 containerd[1495]: time="2025-02-13T19:34:40.590893068Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:40.591065 containerd[1495]: time="2025-02-13T19:34:40.590902156Z" level=info msg="StopPodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:40.591169 containerd[1495]: time="2025-02-13T19:34:40.591139559Z" level=info msg="RemovePodSandbox for \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:40.591214 containerd[1495]: time="2025-02-13T19:34:40.591165058Z" level=info msg="Forcibly stopping sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\"" Feb 13 19:34:40.591324 containerd[1495]: time="2025-02-13T19:34:40.591271671Z" level=info msg="TearDown network for sandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" successfully" Feb 13 19:34:40.595805 containerd[1495]: time="2025-02-13T19:34:40.595751803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.595858 containerd[1495]: time="2025-02-13T19:34:40.595830404Z" level=info msg="RemovePodSandbox \"fc2dcb879548523e0167fcdaedbecd3bc415cf725d254739da7472c33adbf2de\" returns successfully" Feb 13 19:34:40.596282 containerd[1495]: time="2025-02-13T19:34:40.596254473Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:40.596404 containerd[1495]: time="2025-02-13T19:34:40.596378781Z" level=info msg="TearDown network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" successfully" Feb 13 19:34:40.596437 containerd[1495]: time="2025-02-13T19:34:40.596400171Z" level=info msg="StopPodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" returns successfully" Feb 13 19:34:40.596707 containerd[1495]: time="2025-02-13T19:34:40.596663995Z" level=info msg="RemovePodSandbox for \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:40.596707 containerd[1495]: time="2025-02-13T19:34:40.596697950Z" level=info msg="Forcibly stopping sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\"" Feb 13 19:34:40.596860 containerd[1495]: time="2025-02-13T19:34:40.596799123Z" level=info msg="TearDown network for sandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" successfully" Feb 13 19:34:40.601459 containerd[1495]: time="2025-02-13T19:34:40.601411669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.601551 containerd[1495]: time="2025-02-13T19:34:40.601489998Z" level=info msg="RemovePodSandbox \"f5016f826d8e7442c641dd27df6db560be664736b0de8e9e27fb958eeb35964b\" returns successfully" Feb 13 19:34:40.602061 containerd[1495]: time="2025-02-13T19:34:40.601998469Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" Feb 13 19:34:40.602297 containerd[1495]: time="2025-02-13T19:34:40.602220413Z" level=info msg="TearDown network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" successfully" Feb 13 19:34:40.602350 containerd[1495]: time="2025-02-13T19:34:40.602294865Z" level=info msg="StopPodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" returns successfully" Feb 13 19:34:40.603232 containerd[1495]: time="2025-02-13T19:34:40.602661445Z" level=info msg="RemovePodSandbox for \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" Feb 13 19:34:40.603232 containerd[1495]: time="2025-02-13T19:34:40.602695229Z" level=info msg="Forcibly stopping sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\"" Feb 13 19:34:40.603232 containerd[1495]: time="2025-02-13T19:34:40.602800070Z" level=info msg="TearDown network for sandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" successfully" Feb 13 19:34:40.606951 containerd[1495]: time="2025-02-13T19:34:40.606910045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.606951 containerd[1495]: time="2025-02-13T19:34:40.606965040Z" level=info msg="RemovePodSandbox \"5dd0b29164b4bc4aec9fa98f3ef122288ffeb8f6b371401ce79172a2e607f95a\" returns successfully" Feb 13 19:34:40.607384 containerd[1495]: time="2025-02-13T19:34:40.607357169Z" level=info msg="StopPodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\"" Feb 13 19:34:40.607502 containerd[1495]: time="2025-02-13T19:34:40.607472459Z" level=info msg="TearDown network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" successfully" Feb 13 19:34:40.607502 containerd[1495]: time="2025-02-13T19:34:40.607491746Z" level=info msg="StopPodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" returns successfully" Feb 13 19:34:40.607873 containerd[1495]: time="2025-02-13T19:34:40.607804844Z" level=info msg="RemovePodSandbox for \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\"" Feb 13 19:34:40.607873 containerd[1495]: time="2025-02-13T19:34:40.607837837Z" level=info msg="Forcibly stopping sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\"" Feb 13 19:34:40.607950 containerd[1495]: time="2025-02-13T19:34:40.607910195Z" level=info msg="TearDown network for sandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" successfully" Feb 13 19:34:40.613125 containerd[1495]: time="2025-02-13T19:34:40.613060728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.613230 containerd[1495]: time="2025-02-13T19:34:40.613133347Z" level=info msg="RemovePodSandbox \"a9107c2a8622fcd564c95f0a84c0526bfa226e2cc7bd05602c6183fb976cc5b8\" returns successfully" Feb 13 19:34:40.613650 containerd[1495]: time="2025-02-13T19:34:40.613609185Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:40.613811 containerd[1495]: time="2025-02-13T19:34:40.613758230Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:40.613811 containerd[1495]: time="2025-02-13T19:34:40.613777636Z" level=info msg="StopPodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:40.614319 containerd[1495]: time="2025-02-13T19:34:40.614278883Z" level=info msg="RemovePodSandbox for \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:40.614319 containerd[1495]: time="2025-02-13T19:34:40.614306275Z" level=info msg="Forcibly stopping sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\"" Feb 13 19:34:40.614425 containerd[1495]: time="2025-02-13T19:34:40.614389174Z" level=info msg="TearDown network for sandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" successfully" Feb 13 19:34:40.618941 containerd[1495]: time="2025-02-13T19:34:40.618901338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.619041 containerd[1495]: time="2025-02-13T19:34:40.618961542Z" level=info msg="RemovePodSandbox \"3141058ea71a72c2d532201a3b0d90280f92b1bf7dde7851d2a62ae39ce08d63\" returns successfully" Feb 13 19:34:40.619389 containerd[1495]: time="2025-02-13T19:34:40.619346257Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:40.619455 containerd[1495]: time="2025-02-13T19:34:40.619440678Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:40.619455 containerd[1495]: time="2025-02-13T19:34:40.619451659Z" level=info msg="StopPodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:40.619949 containerd[1495]: time="2025-02-13T19:34:40.619783983Z" level=info msg="RemovePodSandbox for \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:40.619949 containerd[1495]: time="2025-02-13T19:34:40.619822566Z" level=info msg="Forcibly stopping sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\"" Feb 13 19:34:40.620074 containerd[1495]: time="2025-02-13T19:34:40.619931514Z" level=info msg="TearDown network for sandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" successfully" Feb 13 19:34:40.627553 containerd[1495]: time="2025-02-13T19:34:40.627479073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.627553 containerd[1495]: time="2025-02-13T19:34:40.627557724Z" level=info msg="RemovePodSandbox \"308692d7f0fd2b8c390981742a480b1ad945050256381bf6ec8b9e6b45b675df\" returns successfully" Feb 13 19:34:40.628288 containerd[1495]: time="2025-02-13T19:34:40.628233765Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:40.628516 containerd[1495]: time="2025-02-13T19:34:40.628412967Z" level=info msg="TearDown network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" successfully" Feb 13 19:34:40.628516 containerd[1495]: time="2025-02-13T19:34:40.628499422Z" level=info msg="StopPodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:40.628854 containerd[1495]: time="2025-02-13T19:34:40.628816867Z" level=info msg="RemovePodSandbox for \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:40.628924 containerd[1495]: time="2025-02-13T19:34:40.628853408Z" level=info msg="Forcibly stopping sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\"" Feb 13 19:34:40.628967 containerd[1495]: time="2025-02-13T19:34:40.628939331Z" level=info msg="TearDown network for sandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" successfully" Feb 13 19:34:40.635577 containerd[1495]: time="2025-02-13T19:34:40.635320613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.635577 containerd[1495]: time="2025-02-13T19:34:40.635542397Z" level=info msg="RemovePodSandbox \"0ec6d50522b743b1da4e2d651e7c2432f896de3b70b8b442756b3faa620b722a\" returns successfully" Feb 13 19:34:40.636163 containerd[1495]: time="2025-02-13T19:34:40.636134306Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:40.636308 containerd[1495]: time="2025-02-13T19:34:40.636277770Z" level=info msg="TearDown network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" successfully" Feb 13 19:34:40.636308 containerd[1495]: time="2025-02-13T19:34:40.636296416Z" level=info msg="StopPodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" returns successfully" Feb 13 19:34:40.636689 containerd[1495]: time="2025-02-13T19:34:40.636658958Z" level=info msg="RemovePodSandbox for \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:40.636760 containerd[1495]: time="2025-02-13T19:34:40.636692001Z" level=info msg="Forcibly stopping sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\"" Feb 13 19:34:40.636856 containerd[1495]: time="2025-02-13T19:34:40.636796181Z" level=info msg="TearDown network for sandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" successfully" Feb 13 19:34:40.641115 containerd[1495]: time="2025-02-13T19:34:40.641078977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.641207 containerd[1495]: time="2025-02-13T19:34:40.641128962Z" level=info msg="RemovePodSandbox \"810f1027770d950cee6e116b035b4af2422ea4af3485c63c51157a4cfd593ff5\" returns successfully" Feb 13 19:34:40.641535 containerd[1495]: time="2025-02-13T19:34:40.641507254Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" Feb 13 19:34:40.641644 containerd[1495]: time="2025-02-13T19:34:40.641629147Z" level=info msg="TearDown network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" successfully" Feb 13 19:34:40.641677 containerd[1495]: time="2025-02-13T19:34:40.641643334Z" level=info msg="StopPodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" returns successfully" Feb 13 19:34:40.642218 containerd[1495]: time="2025-02-13T19:34:40.641935432Z" level=info msg="RemovePodSandbox for \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" Feb 13 19:34:40.642218 containerd[1495]: time="2025-02-13T19:34:40.641966942Z" level=info msg="Forcibly stopping sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\"" Feb 13 19:34:40.642218 containerd[1495]: time="2025-02-13T19:34:40.642061463Z" level=info msg="TearDown network for sandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" successfully" Feb 13 19:34:40.647597 containerd[1495]: time="2025-02-13T19:34:40.647513962Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.647597 containerd[1495]: time="2025-02-13T19:34:40.647565089Z" level=info msg="RemovePodSandbox \"9c932fe6f2d11a0088b436cedc3f31dd9183aa71bc279046510275d6bc8f1219\" returns successfully" Feb 13 19:34:40.648086 containerd[1495]: time="2025-02-13T19:34:40.648043012Z" level=info msg="StopPodSandbox for \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\"" Feb 13 19:34:40.648210 containerd[1495]: time="2025-02-13T19:34:40.648152321Z" level=info msg="TearDown network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" successfully" Feb 13 19:34:40.648210 containerd[1495]: time="2025-02-13T19:34:40.648169483Z" level=info msg="StopPodSandbox for \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" returns successfully" Feb 13 19:34:40.648591 containerd[1495]: time="2025-02-13T19:34:40.648560549Z" level=info msg="RemovePodSandbox for \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\"" Feb 13 19:34:40.648633 containerd[1495]: time="2025-02-13T19:34:40.648623830Z" level=info msg="Forcibly stopping sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\"" Feb 13 19:34:40.648775 containerd[1495]: time="2025-02-13T19:34:40.648728340Z" level=info msg="TearDown network for sandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" successfully" Feb 13 19:34:40.653445 containerd[1495]: time="2025-02-13T19:34:40.653390921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.653603 containerd[1495]: time="2025-02-13T19:34:40.653460173Z" level=info msg="RemovePodSandbox \"3dad3e84f84e7f143a2af923c6999a2fb48d24e3f4383ede2a906cfe25765371\" returns successfully" Feb 13 19:34:40.653932 containerd[1495]: time="2025-02-13T19:34:40.653896687Z" level=info msg="StopPodSandbox for \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\"" Feb 13 19:34:40.654073 containerd[1495]: time="2025-02-13T19:34:40.654052023Z" level=info msg="TearDown network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\" successfully" Feb 13 19:34:40.654073 containerd[1495]: time="2025-02-13T19:34:40.654069356Z" level=info msg="StopPodSandbox for \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\" returns successfully" Feb 13 19:34:40.654452 containerd[1495]: time="2025-02-13T19:34:40.654426057Z" level=info msg="RemovePodSandbox for \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\"" Feb 13 19:34:40.654452 containerd[1495]: time="2025-02-13T19:34:40.654452758Z" level=info msg="Forcibly stopping sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\"" Feb 13 19:34:40.654573 containerd[1495]: time="2025-02-13T19:34:40.654548311Z" level=info msg="TearDown network for sandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\" successfully" Feb 13 19:34:40.660564 containerd[1495]: time="2025-02-13T19:34:40.660517536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.660667 containerd[1495]: time="2025-02-13T19:34:40.660587070Z" level=info msg="RemovePodSandbox \"78f60a83594a862a1cf0fba38d44be302bb2bf8e71db4ba9765b06b378d42870\" returns successfully" Feb 13 19:34:40.661234 containerd[1495]: time="2025-02-13T19:34:40.661124636Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:40.661339 containerd[1495]: time="2025-02-13T19:34:40.661260414Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:40.661339 containerd[1495]: time="2025-02-13T19:34:40.661274762Z" level=info msg="StopPodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:40.661580 containerd[1495]: time="2025-02-13T19:34:40.661546972Z" level=info msg="RemovePodSandbox for \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:40.661580 containerd[1495]: time="2025-02-13T19:34:40.661577881Z" level=info msg="Forcibly stopping sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\"" Feb 13 19:34:40.661699 containerd[1495]: time="2025-02-13T19:34:40.661653315Z" level=info msg="TearDown network for sandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" successfully" Feb 13 19:34:40.666329 containerd[1495]: time="2025-02-13T19:34:40.666258236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.666329 containerd[1495]: time="2025-02-13T19:34:40.666312239Z" level=info msg="RemovePodSandbox \"2dff6c91b17d4ebb0c88e72e427a2da2af606c9966331d5904e621fbb185574e\" returns successfully" Feb 13 19:34:40.666883 containerd[1495]: time="2025-02-13T19:34:40.666841119Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:40.667028 containerd[1495]: time="2025-02-13T19:34:40.666997968Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:40.667061 containerd[1495]: time="2025-02-13T19:34:40.667017415Z" level=info msg="StopPodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:40.667516 containerd[1495]: time="2025-02-13T19:34:40.667450602Z" level=info msg="RemovePodSandbox for \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:40.667516 containerd[1495]: time="2025-02-13T19:34:40.667479787Z" level=info msg="Forcibly stopping sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\"" Feb 13 19:34:40.667736 containerd[1495]: time="2025-02-13T19:34:40.667558698Z" level=info msg="TearDown network for sandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" successfully" Feb 13 19:34:40.671593 containerd[1495]: time="2025-02-13T19:34:40.671530681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.671784 containerd[1495]: time="2025-02-13T19:34:40.671752976Z" level=info msg="RemovePodSandbox \"8dad948d25aa0c928df3c796968e52157a865fe2e300474d6de32c1ef08e9391\" returns successfully" Feb 13 19:34:40.672253 containerd[1495]: time="2025-02-13T19:34:40.672222663Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:40.672397 containerd[1495]: time="2025-02-13T19:34:40.672329165Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:40.672397 containerd[1495]: time="2025-02-13T19:34:40.672391835Z" level=info msg="StopPodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:40.672853 containerd[1495]: time="2025-02-13T19:34:40.672808400Z" level=info msg="RemovePodSandbox for \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:40.672913 containerd[1495]: time="2025-02-13T19:34:40.672862704Z" level=info msg="Forcibly stopping sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\"" Feb 13 19:34:40.673063 containerd[1495]: time="2025-02-13T19:34:40.672985278Z" level=info msg="TearDown network for sandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" successfully" Feb 13 19:34:40.678033 containerd[1495]: time="2025-02-13T19:34:40.677970936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.678093 containerd[1495]: time="2025-02-13T19:34:40.678050037Z" level=info msg="RemovePodSandbox \"2cce2b20655065aaad5000f2d2f186a4c2d01f9414630e6a4e453114bd67b947\" returns successfully" Feb 13 19:34:40.678710 containerd[1495]: time="2025-02-13T19:34:40.678494155Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:40.678710 containerd[1495]: time="2025-02-13T19:34:40.678619294Z" level=info msg="TearDown network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" successfully" Feb 13 19:34:40.678710 containerd[1495]: time="2025-02-13T19:34:40.678632870Z" level=info msg="StopPodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" returns successfully" Feb 13 19:34:40.679065 containerd[1495]: time="2025-02-13T19:34:40.679017504Z" level=info msg="RemovePodSandbox for \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:40.679120 containerd[1495]: time="2025-02-13T19:34:40.679066568Z" level=info msg="Forcibly stopping sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\"" Feb 13 19:34:40.679249 containerd[1495]: time="2025-02-13T19:34:40.679160177Z" level=info msg="TearDown network for sandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" successfully" Feb 13 19:34:40.684957 containerd[1495]: time="2025-02-13T19:34:40.684900085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.685035 containerd[1495]: time="2025-02-13T19:34:40.684984175Z" level=info msg="RemovePodSandbox \"3c3a888f86a2188776fd92d6124e796aa00f1b98c899a2b7561079f4bc7c6b97\" returns successfully" Feb 13 19:34:40.685529 containerd[1495]: time="2025-02-13T19:34:40.685496152Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" Feb 13 19:34:40.685660 containerd[1495]: time="2025-02-13T19:34:40.685634166Z" level=info msg="TearDown network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" successfully" Feb 13 19:34:40.685660 containerd[1495]: time="2025-02-13T19:34:40.685652250Z" level=info msg="StopPodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" returns successfully" Feb 13 19:34:40.685994 containerd[1495]: time="2025-02-13T19:34:40.685967863Z" level=info msg="RemovePodSandbox for \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" Feb 13 19:34:40.686072 containerd[1495]: time="2025-02-13T19:34:40.685996878Z" level=info msg="Forcibly stopping sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\"" Feb 13 19:34:40.686107 containerd[1495]: time="2025-02-13T19:34:40.686088824Z" level=info msg="TearDown network for sandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" successfully" Feb 13 19:34:40.691314 containerd[1495]: time="2025-02-13T19:34:40.691259115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.691363 containerd[1495]: time="2025-02-13T19:34:40.691324940Z" level=info msg="RemovePodSandbox \"b39e38a93b0c8045971402acc661d969d72b24754c43187b10d0d57ec9c8a6ec\" returns successfully" Feb 13 19:34:40.691728 containerd[1495]: time="2025-02-13T19:34:40.691700818Z" level=info msg="StopPodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\"" Feb 13 19:34:40.691829 containerd[1495]: time="2025-02-13T19:34:40.691804616Z" level=info msg="TearDown network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" successfully" Feb 13 19:34:40.691829 containerd[1495]: time="2025-02-13T19:34:40.691821699Z" level=info msg="StopPodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" returns successfully" Feb 13 19:34:40.692248 containerd[1495]: time="2025-02-13T19:34:40.692211683Z" level=info msg="RemovePodSandbox for \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\"" Feb 13 19:34:40.692311 containerd[1495]: time="2025-02-13T19:34:40.692255546Z" level=info msg="Forcibly stopping sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\"" Feb 13 19:34:40.692440 containerd[1495]: time="2025-02-13T19:34:40.692414480Z" level=info msg="TearDown network for sandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" successfully" Feb 13 19:34:40.697269 containerd[1495]: time="2025-02-13T19:34:40.697185999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.697424 containerd[1495]: time="2025-02-13T19:34:40.697289787Z" level=info msg="RemovePodSandbox \"0a6b222259e41f16aa5d6e0e2769ae03a8dc10db01bcddbfd4a7c69022252c00\" returns successfully" Feb 13 19:34:40.697767 containerd[1495]: time="2025-02-13T19:34:40.697745918Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:40.697868 containerd[1495]: time="2025-02-13T19:34:40.697850257Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:40.697896 containerd[1495]: time="2025-02-13T19:34:40.697866939Z" level=info msg="StopPodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:40.698169 containerd[1495]: time="2025-02-13T19:34:40.698141854Z" level=info msg="RemovePodSandbox for \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:40.698246 containerd[1495]: time="2025-02-13T19:34:40.698168184Z" level=info msg="Forcibly stopping sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\"" Feb 13 19:34:40.698350 containerd[1495]: time="2025-02-13T19:34:40.698290788Z" level=info msg="TearDown network for sandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" successfully" Feb 13 19:34:40.702337 containerd[1495]: time="2025-02-13T19:34:40.702299592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.702419 containerd[1495]: time="2025-02-13T19:34:40.702370166Z" level=info msg="RemovePodSandbox \"37a53554b4e11106f55bf264d35e2ac0c7fbb167bec99785e8ecf4053073b8be\" returns successfully" Feb 13 19:34:40.702699 containerd[1495]: time="2025-02-13T19:34:40.702678645Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:40.702848 containerd[1495]: time="2025-02-13T19:34:40.702767604Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:40.702848 containerd[1495]: time="2025-02-13T19:34:40.702789316Z" level=info msg="StopPodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:40.703086 containerd[1495]: time="2025-02-13T19:34:40.703062918Z" level=info msg="RemovePodSandbox for \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:40.703133 containerd[1495]: time="2025-02-13T19:34:40.703090761Z" level=info msg="Forcibly stopping sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\"" Feb 13 19:34:40.703218 containerd[1495]: time="2025-02-13T19:34:40.703166857Z" level=info msg="TearDown network for sandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" successfully" Feb 13 19:34:40.707037 containerd[1495]: time="2025-02-13T19:34:40.706997690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.707115 containerd[1495]: time="2025-02-13T19:34:40.707061472Z" level=info msg="RemovePodSandbox \"ee1417dac12b9fdee7f8c6285d264fcc6b169ce7492ed8d12dc01597db7b96a0\" returns successfully" Feb 13 19:34:40.707445 containerd[1495]: time="2025-02-13T19:34:40.707385731Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:40.707514 containerd[1495]: time="2025-02-13T19:34:40.707497474Z" level=info msg="TearDown network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:40.707551 containerd[1495]: time="2025-02-13T19:34:40.707510610Z" level=info msg="StopPodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully" Feb 13 19:34:40.707780 containerd[1495]: time="2025-02-13T19:34:40.707753173Z" level=info msg="RemovePodSandbox for \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:40.707824 containerd[1495]: time="2025-02-13T19:34:40.707778541Z" level=info msg="Forcibly stopping sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\"" Feb 13 19:34:40.707899 containerd[1495]: time="2025-02-13T19:34:40.707864816Z" level=info msg="TearDown network for sandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" successfully" Feb 13 19:34:40.712101 containerd[1495]: time="2025-02-13T19:34:40.712046830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:34:40.712182 containerd[1495]: time="2025-02-13T19:34:40.712106644Z" level=info msg="RemovePodSandbox \"6c742bf39e2ea7b04e206bafde123ef055b7e144075d20d68e3133ecfe9c3d99\" returns successfully"
Feb 13 19:34:40.712435 containerd[1495]: time="2025-02-13T19:34:40.712400415Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\""
Feb 13 19:34:40.712512 containerd[1495]: time="2025-02-13T19:34:40.712493823Z" level=info msg="TearDown network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" successfully"
Feb 13 19:34:40.712512 containerd[1495]: time="2025-02-13T19:34:40.712506537Z" level=info msg="StopPodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" returns successfully"
Feb 13 19:34:40.712861 containerd[1495]: time="2025-02-13T19:34:40.712836788Z" level=info msg="RemovePodSandbox for \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\""
Feb 13 19:34:40.712914 containerd[1495]: time="2025-02-13T19:34:40.712872005Z" level=info msg="Forcibly stopping sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\""
Feb 13 19:34:40.712998 containerd[1495]: time="2025-02-13T19:34:40.712958701Z" level=info msg="TearDown network for sandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" successfully"
Feb 13 19:34:40.717437 containerd[1495]: time="2025-02-13T19:34:40.717383037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:34:40.717537 containerd[1495]: time="2025-02-13T19:34:40.717459693Z" level=info msg="RemovePodSandbox \"8b53a6b809374192e53a6b51bc2b6759e8f0fd31cd26b6dfa1bc9ea84be07417\" returns successfully"
Feb 13 19:34:40.717876 containerd[1495]: time="2025-02-13T19:34:40.717849898Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\""
Feb 13 19:34:40.717979 containerd[1495]: time="2025-02-13T19:34:40.717957052Z" level=info msg="TearDown network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" successfully"
Feb 13 19:34:40.717979 containerd[1495]: time="2025-02-13T19:34:40.717974305Z" level=info msg="StopPodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" returns successfully"
Feb 13 19:34:40.718503 containerd[1495]: time="2025-02-13T19:34:40.718465984Z" level=info msg="RemovePodSandbox for \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\""
Feb 13 19:34:40.718576 containerd[1495]: time="2025-02-13T19:34:40.718508976Z" level=info msg="Forcibly stopping sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\""
Feb 13 19:34:40.718625 containerd[1495]: time="2025-02-13T19:34:40.718613196Z" level=info msg="TearDown network for sandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" successfully"
Feb 13 19:34:40.723534 containerd[1495]: time="2025-02-13T19:34:40.723484845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:34:40.723596 containerd[1495]: time="2025-02-13T19:34:40.723547204Z" level=info msg="RemovePodSandbox \"23d001cf61e7393c4a953d7573b0be9581e740b1429caa622a87320c2769ee0a\" returns successfully"
Feb 13 19:34:40.723965 containerd[1495]: time="2025-02-13T19:34:40.723931267Z" level=info msg="StopPodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\""
Feb 13 19:34:40.724124 containerd[1495]: time="2025-02-13T19:34:40.724081875Z" level=info msg="TearDown network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" successfully"
Feb 13 19:34:40.724124 containerd[1495]: time="2025-02-13T19:34:40.724095001Z" level=info msg="StopPodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" returns successfully"
Feb 13 19:34:40.724357 containerd[1495]: time="2025-02-13T19:34:40.724338736Z" level=info msg="RemovePodSandbox for \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\""
Feb 13 19:34:40.724394 containerd[1495]: time="2025-02-13T19:34:40.724362331Z" level=info msg="Forcibly stopping sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\""
Feb 13 19:34:40.724458 containerd[1495]: time="2025-02-13T19:34:40.724424720Z" level=info msg="TearDown network for sandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" successfully"
Feb 13 19:34:40.729007 containerd[1495]: time="2025-02-13T19:34:40.728965799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:34:40.729373 containerd[1495]: time="2025-02-13T19:34:40.729338871Z" level=info msg="RemovePodSandbox \"6647169439a67bdf8898973fd67e229253b024f46823a489f3d62a56fe4c56d5\" returns successfully"
Feb 13 19:34:41.083076 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:36350.service - OpenSSH per-connection server daemon (10.0.0.1:36350).
Feb 13 19:34:41.143870 sshd[6310]: Accepted publickey for core from 10.0.0.1 port 36350 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:41.145863 sshd-session[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:41.151168 systemd-logind[1480]: New session 18 of user core.
Feb 13 19:34:41.156357 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:34:41.300517 sshd[6312]: Connection closed by 10.0.0.1 port 36350
Feb 13 19:34:41.300900 sshd-session[6310]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:41.305482 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:36350.service: Deactivated successfully.
Feb 13 19:34:41.308306 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:34:41.309061 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:34:41.310104 systemd-logind[1480]: Removed session 18.
Feb 13 19:34:46.316306 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:56888.service - OpenSSH per-connection server daemon (10.0.0.1:56888).
Feb 13 19:34:46.451537 sshd[6327]: Accepted publickey for core from 10.0.0.1 port 56888 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:46.453659 sshd-session[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:46.460642 systemd-logind[1480]: New session 19 of user core.
Feb 13 19:34:46.475470 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:34:46.607753 sshd[6329]: Connection closed by 10.0.0.1 port 56888
Feb 13 19:34:46.608386 sshd-session[6327]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:46.618595 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:56888.service: Deactivated successfully.
Feb 13 19:34:46.621017 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:34:46.623185 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:34:46.627495 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:56890.service - OpenSSH per-connection server daemon (10.0.0.1:56890).
Feb 13 19:34:46.628536 systemd-logind[1480]: Removed session 19.
Feb 13 19:34:46.672598 sshd[6341]: Accepted publickey for core from 10.0.0.1 port 56890 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:46.674637 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:46.680618 systemd-logind[1480]: New session 20 of user core.
Feb 13 19:34:46.690333 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:34:47.630029 sshd[6343]: Connection closed by 10.0.0.1 port 56890
Feb 13 19:34:47.630493 sshd-session[6341]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:47.641419 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:56890.service: Deactivated successfully.
Feb 13 19:34:47.643491 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:34:47.645321 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:34:47.654568 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:56904.service - OpenSSH per-connection server daemon (10.0.0.1:56904).
Feb 13 19:34:47.655737 systemd-logind[1480]: Removed session 20.
Feb 13 19:34:47.699499 sshd[6354]: Accepted publickey for core from 10.0.0.1 port 56904 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:47.701248 sshd-session[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:47.706117 systemd-logind[1480]: New session 21 of user core.
Feb 13 19:34:47.715443 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:34:48.136174 kubelet[2640]: E0213 19:34:48.136140 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:48.154669 kubelet[2640]: I0213 19:34:48.154598 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mrdz6" podStartSLOduration=41.076458663 podStartE2EDuration="53.154577595s" podCreationTimestamp="2025-02-13 19:33:55 +0000 UTC" firstStartedPulling="2025-02-13 19:34:18.032006918 +0000 UTC m=+38.165323703" lastFinishedPulling="2025-02-13 19:34:30.11012585 +0000 UTC m=+50.243442635" observedRunningTime="2025-02-13 19:34:31.170906738 +0000 UTC m=+51.304223514" watchObservedRunningTime="2025-02-13 19:34:48.154577595 +0000 UTC m=+68.287894380"
Feb 13 19:34:48.773184 sshd[6356]: Connection closed by 10.0.0.1 port 56904
Feb 13 19:34:48.774378 sshd-session[6354]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:48.785267 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:56904.service: Deactivated successfully.
Feb 13 19:34:48.787794 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:34:48.789367 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:34:48.799318 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:56910.service - OpenSSH per-connection server daemon (10.0.0.1:56910).
Feb 13 19:34:48.801860 systemd-logind[1480]: Removed session 21.
Feb 13 19:34:48.851176 sshd[6412]: Accepted publickey for core from 10.0.0.1 port 56910 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:48.852912 sshd-session[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:48.857533 systemd-logind[1480]: New session 22 of user core.
Feb 13 19:34:48.869400 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:34:49.271482 sshd[6414]: Connection closed by 10.0.0.1 port 56910
Feb 13 19:34:49.272429 sshd-session[6412]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:49.284547 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:56910.service: Deactivated successfully.
Feb 13 19:34:49.286665 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:34:49.288671 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:34:49.295482 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:56916.service - OpenSSH per-connection server daemon (10.0.0.1:56916).
Feb 13 19:34:49.296524 systemd-logind[1480]: Removed session 22.
Feb 13 19:34:49.334181 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 56916 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:49.335778 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:49.340441 systemd-logind[1480]: New session 23 of user core.
Feb 13 19:34:49.346356 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:34:49.464327 sshd[6426]: Connection closed by 10.0.0.1 port 56916
Feb 13 19:34:49.464651 sshd-session[6424]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:49.468237 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:56916.service: Deactivated successfully.
Feb 13 19:34:49.470230 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:34:49.470836 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:34:49.471686 systemd-logind[1480]: Removed session 23.
Feb 13 19:34:54.475626 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:43980.service - OpenSSH per-connection server daemon (10.0.0.1:43980).
Feb 13 19:34:54.520814 sshd[6460]: Accepted publickey for core from 10.0.0.1 port 43980 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:54.522556 sshd-session[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:54.526635 systemd-logind[1480]: New session 24 of user core.
Feb 13 19:34:54.540325 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:34:54.661389 sshd[6462]: Connection closed by 10.0.0.1 port 43980
Feb 13 19:34:54.661719 sshd-session[6460]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:54.665348 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:43980.service: Deactivated successfully.
Feb 13 19:34:54.667549 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:34:54.668391 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:34:54.669347 systemd-logind[1480]: Removed session 24.
Feb 13 19:34:54.989352 kubelet[2640]: E0213 19:34:54.989301 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:55.212970 kubelet[2640]: I0213 19:34:55.212915 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:34:58.988318 kubelet[2640]: E0213 19:34:58.988255 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:59.684456 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:43984.service - OpenSSH per-connection server daemon (10.0.0.1:43984).
Feb 13 19:34:59.733299 sshd[6487]: Accepted publickey for core from 10.0.0.1 port 43984 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:34:59.735269 sshd-session[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:59.740111 systemd-logind[1480]: New session 25 of user core.
Feb 13 19:34:59.749488 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:34:59.887184 sshd[6489]: Connection closed by 10.0.0.1 port 43984
Feb 13 19:34:59.887659 sshd-session[6487]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:59.892450 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:43984.service: Deactivated successfully.
Feb 13 19:34:59.895140 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:34:59.895872 systemd-logind[1480]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:34:59.896839 systemd-logind[1480]: Removed session 25.
Feb 13 19:34:59.989410 kubelet[2640]: E0213 19:34:59.989250 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:35:04.901286 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134).
Feb 13 19:35:04.946558 sshd[6502]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:35:04.948233 sshd-session[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:04.952908 systemd-logind[1480]: New session 26 of user core.
Feb 13 19:35:04.967378 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:35:05.130343 sshd[6504]: Connection closed by 10.0.0.1 port 55134
Feb 13 19:35:05.130870 sshd-session[6502]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:05.134746 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:55134.service: Deactivated successfully.
Feb 13 19:35:05.136991 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:35:05.137708 systemd-logind[1480]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:35:05.138738 systemd-logind[1480]: Removed session 26.
Feb 13 19:35:10.154529 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:55136.service - OpenSSH per-connection server daemon (10.0.0.1:55136).
Feb 13 19:35:10.199296 sshd[6516]: Accepted publickey for core from 10.0.0.1 port 55136 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:35:10.201267 sshd-session[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:10.205840 systemd-logind[1480]: New session 27 of user core.
Feb 13 19:35:10.211333 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:35:10.390401 sshd[6518]: Connection closed by 10.0.0.1 port 55136
Feb 13 19:35:10.390781 sshd-session[6516]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:10.394760 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:55136.service: Deactivated successfully.
Feb 13 19:35:10.397641 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:35:10.398426 systemd-logind[1480]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:35:10.399599 systemd-logind[1480]: Removed session 27.