Jan 13 20:45:30.912326 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:45:30.912352 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:30.912367 kernel: BIOS-provided physical RAM map:
Jan 13 20:45:30.912376 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 20:45:30.912395 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 20:45:30.912403 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 20:45:30.912413 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 20:45:30.912422 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 20:45:30.912439 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 13 20:45:30.912461 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 13 20:45:30.912488 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 13 20:45:30.912504 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 13 20:45:30.912513 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 13 20:45:30.912522 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 13 20:45:30.912532 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 13 20:45:30.912542 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 20:45:30.912554 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 13 20:45:30.912563 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 13 20:45:30.912576 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 13 20:45:30.912585 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 13 20:45:30.912595 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 13 20:45:30.912604 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 20:45:30.912613 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:45:30.912622 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:45:30.912631 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 13 20:45:30.912640 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:45:30.912650 kernel: NX (Execute Disable) protection: active
Jan 13 20:45:30.912662 kernel: APIC: Static calls initialized
Jan 13 20:45:30.912671 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 13 20:45:30.912681 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 13 20:45:30.912690 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 13 20:45:30.912698 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 13 20:45:30.912707 kernel: extended physical RAM map:
Jan 13 20:45:30.912716 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 20:45:30.912726 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 20:45:30.912735 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 20:45:30.912744 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 20:45:30.912753 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 20:45:30.912765 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 13 20:45:30.912774 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 13 20:45:30.912788 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 13 20:45:30.912798 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 13 20:45:30.912807 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 13 20:45:30.912817 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 13 20:45:30.912827 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 13 20:45:30.912840 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 13 20:45:30.912849 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 13 20:45:30.912859 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 13 20:45:30.912869 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 13 20:45:30.912952 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 20:45:30.912962 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 13 20:45:30.912971 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 13 20:45:30.912980 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 13 20:45:30.912990 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 13 20:45:30.913003 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 13 20:45:30.913013 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 20:45:30.913023 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:45:30.913032 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:45:30.913042 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 13 20:45:30.913052 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:45:30.913062 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:45:30.913071 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 13 20:45:30.913081 kernel: random: crng init done
Jan 13 20:45:30.913091 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 13 20:45:30.913101 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 13 20:45:30.913113 kernel: secureboot: Secure boot disabled
Jan 13 20:45:30.913123 kernel: SMBIOS 2.8 present.
Jan 13 20:45:30.913132 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 13 20:45:30.913142 kernel: Hypervisor detected: KVM
Jan 13 20:45:30.913152 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:45:30.913162 kernel: kvm-clock: using sched offset of 2806747067 cycles
Jan 13 20:45:30.913172 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:45:30.913182 kernel: tsc: Detected 2794.750 MHz processor
Jan 13 20:45:30.913192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:45:30.913203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:45:30.913213 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 13 20:45:30.913226 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 13 20:45:30.913236 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:45:30.913245 kernel: Using GB pages for direct mapping
Jan 13 20:45:30.913255 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:45:30.913265 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 13 20:45:30.913276 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:45:30.913286 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:30.913296 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:30.913306 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 13 20:45:30.913318 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:30.913352 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:30.913363 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:30.913373 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:45:30.913392 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:45:30.913403 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 13 20:45:30.913416 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 13 20:45:30.913426 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 13 20:45:30.913440 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 13 20:45:30.913450 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 13 20:45:30.913460 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 13 20:45:30.913470 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 13 20:45:30.913480 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 13 20:45:30.913490 kernel: No NUMA configuration found
Jan 13 20:45:30.913500 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 13 20:45:30.913510 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 13 20:45:30.913520 kernel: Zone ranges:
Jan 13 20:45:30.913530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:45:30.913543 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 13 20:45:30.913553 kernel: Normal empty
Jan 13 20:45:30.913563 kernel: Movable zone start for each node
Jan 13 20:45:30.913572 kernel: Early memory node ranges
Jan 13 20:45:30.913582 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 13 20:45:30.913592 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 13 20:45:30.913602 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 13 20:45:30.913612 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 13 20:45:30.913622 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 13 20:45:30.913634 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 13 20:45:30.913644 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 13 20:45:30.913654 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 13 20:45:30.913664 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 13 20:45:30.913674 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:45:30.913685 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 13 20:45:30.913708 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 13 20:45:30.913723 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:45:30.913734 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 13 20:45:30.913747 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 13 20:45:30.913757 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 20:45:30.913767 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 13 20:45:30.913781 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 13 20:45:30.913791 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:45:30.913802 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:45:30.913813 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:45:30.913824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:45:30.913837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:45:30.913848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:45:30.913859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:45:30.913869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:45:30.913894 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:45:30.913905 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:45:30.913915 kernel: TSC deadline timer available
Jan 13 20:45:30.913926 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:45:30.913951 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:45:30.913965 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:45:30.913976 kernel: kvm-guest: setup PV sched yield
Jan 13 20:45:30.913986 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 13 20:45:30.913997 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:45:30.914008 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:45:30.914019 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:45:30.914030 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:45:30.914040 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:45:30.914051 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:45:30.914063 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:45:30.914074 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:45:30.914086 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:30.914097 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:45:30.914108 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:45:30.914118 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:45:30.914129 kernel: Fallback order for Node 0: 0
Jan 13 20:45:30.914139 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 13 20:45:30.914150 kernel: Policy zone: DMA32
Jan 13 20:45:30.914163 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:45:30.914174 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Jan 13 20:45:30.914184 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:45:30.914195 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:45:30.914205 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:45:30.914215 kernel: Dynamic Preempt: voluntary
Jan 13 20:45:30.914226 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:45:30.914241 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:45:30.914252 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:45:30.914266 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:45:30.914276 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:45:30.914287 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:45:30.914297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:45:30.914307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:45:30.914318 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:45:30.914328 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:45:30.914339 kernel: Console: colour dummy device 80x25
Jan 13 20:45:30.914349 kernel: printk: console [ttyS0] enabled
Jan 13 20:45:30.914362 kernel: ACPI: Core revision 20230628
Jan 13 20:45:30.914372 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:45:30.914393 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:45:30.914404 kernel: x2apic enabled
Jan 13 20:45:30.914414 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:45:30.914425 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:45:30.914436 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:45:30.914446 kernel: kvm-guest: setup PV IPIs
Jan 13 20:45:30.914456 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:45:30.914469 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:45:30.914480 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 13 20:45:30.914490 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:45:30.914501 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:45:30.914511 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:45:30.914522 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:45:30.914532 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:45:30.914543 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:45:30.914553 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:45:30.914566 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:45:30.914576 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:45:30.914587 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:45:30.914597 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:45:30.914608 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:45:30.914619 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:45:30.914629 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:45:30.914640 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:45:30.914653 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:45:30.914663 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:45:30.914673 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:45:30.914684 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:45:30.914694 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:45:30.914704 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:45:30.914715 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:45:30.914725 kernel: landlock: Up and running.
Jan 13 20:45:30.914735 kernel: SELinux: Initializing.
Jan 13 20:45:30.914748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:45:30.914758 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:45:30.914769 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:45:30.914779 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:45:30.914790 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:45:30.914800 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:45:30.914811 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:45:30.914821 kernel: ... version: 0
Jan 13 20:45:30.914831 kernel: ... bit width: 48
Jan 13 20:45:30.914844 kernel: ... generic registers: 6
Jan 13 20:45:30.914854 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:45:30.914865 kernel: ... max period: 00007fffffffffff
Jan 13 20:45:30.914936 kernel: ... fixed-purpose events: 0
Jan 13 20:45:30.914947 kernel: ... event mask: 000000000000003f
Jan 13 20:45:30.914957 kernel: signal: max sigframe size: 1776
Jan 13 20:45:30.914968 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:45:30.914979 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:45:30.914989 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:45:30.915003 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:45:30.915013 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:45:30.915023 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:45:30.915034 kernel: smpboot: Max logical packages: 1
Jan 13 20:45:30.915044 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 13 20:45:30.915054 kernel: devtmpfs: initialized
Jan 13 20:45:30.915065 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:45:30.915075 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 13 20:45:30.915086 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 13 20:45:30.915099 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 13 20:45:30.915109 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 13 20:45:30.915120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 13 20:45:30.915130 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 13 20:45:30.915141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:45:30.915151 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:45:30.915162 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:45:30.915172 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:45:30.915182 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:45:30.915196 kernel: audit: type=2000 audit(1736801130.047:1): state=initialized audit_enabled=0 res=1
Jan 13 20:45:30.915206 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:45:30.915217 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:45:30.915226 kernel: cpuidle: using governor menu
Jan 13 20:45:30.915236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:45:30.915246 kernel: dca service started, version 1.12.1
Jan 13 20:45:30.915256 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 20:45:30.915265 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:45:30.915275 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:45:30.915288 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:45:30.915298 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:45:30.915309 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:45:30.915319 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:45:30.915329 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:45:30.915339 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:45:30.915349 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:45:30.915360 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:45:30.915370 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:45:30.915394 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:45:30.915404 kernel: ACPI: Interpreter enabled
Jan 13 20:45:30.915414 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:45:30.915424 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:45:30.915435 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:45:30.915445 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:45:30.915455 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:45:30.915465 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:45:30.915686 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:45:30.915849 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:45:30.916021 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:45:30.916037 kernel: PCI host bridge to bus 0000:00
Jan 13 20:45:30.916191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:45:30.916327 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:45:30.916470 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:30.916605 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 13 20:45:30.916732 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 13 20:45:30.916862 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 13 20:45:30.917018 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:45:30.917162 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:45:30.917292 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:45:30.917430 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 13 20:45:30.917549 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 13 20:45:30.917667 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 13 20:45:30.917785 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 13 20:45:30.917928 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:45:30.918061 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:45:30.918197 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 13 20:45:30.918362 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 13 20:45:30.918530 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 13 20:45:30.918699 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:45:30.918858 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 13 20:45:30.919046 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 13 20:45:30.919205 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 13 20:45:30.919375 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:45:30.919578 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 13 20:45:30.919722 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 13 20:45:30.919855 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 13 20:45:30.920011 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 13 20:45:30.920149 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:45:30.920271 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:45:30.920409 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:45:30.920536 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 13 20:45:30.920657 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 13 20:45:30.920786 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:45:30.920934 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 13 20:45:30.920946 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:45:30.920954 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:45:30.920961 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:45:30.920973 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:45:30.920980 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:45:30.920988 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:45:30.920995 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:45:30.921003 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:45:30.921010 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:45:30.921018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:45:30.921028 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:45:30.921038 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:45:30.921051 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:45:30.921061 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:45:30.921071 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:45:30.921082 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:45:30.921092 kernel: iommu: Default domain type: Translated
Jan 13 20:45:30.921103 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:45:30.921114 kernel: efivars: Registered efivars operations
Jan 13 20:45:30.921124 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:45:30.921135 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:45:30.921148 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 13 20:45:30.921159 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 13 20:45:30.921169 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 13 20:45:30.921179 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 13 20:45:30.921190 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 13 20:45:30.921200 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 13 20:45:30.921211 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 13 20:45:30.921221 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 13 20:45:30.921388 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:45:30.921552 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:45:30.921707 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:45:30.921722 kernel: vgaarb: loaded
Jan 13 20:45:30.921733 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:45:30.921744 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:45:30.921755 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:45:30.921765 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:45:30.921776 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:45:30.921790 kernel: pnp: PnP ACPI init
Jan 13 20:45:30.922022 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 13 20:45:30.922040 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:45:30.922052 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:45:30.922062 kernel: NET: Registered PF_INET protocol family
Jan 13 20:45:30.922097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:45:30.922111 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:45:30.922122 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:45:30.922136 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:45:30.922147 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:45:30.922158 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:45:30.922169 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:45:30.922180 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:45:30.922191 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:45:30.922202 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:45:30.922362 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 13 20:45:30.922535 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 13 20:45:30.922661 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:45:30.922770 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:45:30.922898 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:30.923012 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 13 20:45:30.923122 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 13 20:45:30.923263 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 13 20:45:30.923279 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:45:30.923290 kernel: Initialise system trusted keyrings
Jan 13 20:45:30.923306 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:45:30.923317 kernel: Key type asymmetric registered
Jan 13 20:45:30.923327 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:45:30.923338 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:45:30.923348 kernel: io scheduler mq-deadline registered
Jan 13 20:45:30.923359 kernel: io scheduler kyber registered
Jan 13 20:45:30.923370 kernel: io scheduler bfq registered
Jan 13 20:45:30.923391 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:45:30.923402 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:45:30.923417 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:45:30.923431 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:45:30.923444 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:45:30.923455 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:45:30.923466 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:45:30.923480 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:45:30.923491 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:45:30.923656 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:45:30.923805 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:45:30.923822 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 13 20:45:30.923982 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:45:30 UTC (1736801130)
Jan 13 20:45:30.924128 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:45:30.924144 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:45:30.924159 kernel: efifb: probing for efifb
Jan 13 20:45:30.924170 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 13 20:45:30.924181 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 13 20:45:30.924192 kernel: efifb: scrolling: redraw
Jan 13 20:45:30.924202 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 13 20:45:30.924213 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:45:30.924224 kernel: fb0: EFI VGA frame buffer device
Jan 13 20:45:30.924235 kernel: pstore: Using crash dump compression: deflate
Jan 13 20:45:30.924246 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 20:45:30.924260 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:45:30.924271 kernel: Segment Routing with IPv6
Jan 13 20:45:30.924282 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:45:30.924293 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:45:30.924303 kernel: Key type dns_resolver registered
Jan 13 20:45:30.924314 kernel: IPI shorthand broadcast: enabled
Jan 13 20:45:30.924325 kernel: sched_clock: Marking stable (624002981, 231108178)->(871262257, -16151098)
Jan 13 20:45:30.924336 kernel: registered taskstats version 1
Jan 13 20:45:30.924346 kernel: Loading compiled-in X.509 certificates
Jan 13 20:45:30.924358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:45:30.924372 kernel: Key type .fscrypt registered
Jan 13 20:45:30.924392 kernel: Key type fscrypt-provisioning registered
Jan 13 20:45:30.924403 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:45:30.924413 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:45:30.924424 kernel: ima: No architecture policies found Jan 13 20:45:30.924435 kernel: clk: Disabling unused clocks Jan 13 20:45:30.924446 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:45:30.924457 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:45:30.924471 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:45:30.924482 kernel: Run /init as init process Jan 13 20:45:30.924492 kernel: with arguments: Jan 13 20:45:30.924503 kernel: /init Jan 13 20:45:30.924514 kernel: with environment: Jan 13 20:45:30.924525 kernel: HOME=/ Jan 13 20:45:30.924535 kernel: TERM=linux Jan 13 20:45:30.924546 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:45:30.924559 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:45:30.924577 systemd[1]: Detected virtualization kvm. Jan 13 20:45:30.924588 systemd[1]: Detected architecture x86-64. Jan 13 20:45:30.924600 systemd[1]: Running in initrd. Jan 13 20:45:30.924611 systemd[1]: No hostname configured, using default hostname. Jan 13 20:45:30.924622 systemd[1]: Hostname set to . Jan 13 20:45:30.924633 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:45:30.924645 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:45:30.924660 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:45:30.924672 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 20:45:30.924684 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:45:30.924696 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:45:30.924708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:45:30.924719 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:45:30.924733 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:45:30.924748 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:45:30.924760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:45:30.924772 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:45:30.924783 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:45:30.924795 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:45:30.924806 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:45:30.924818 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:45:30.924830 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:45:30.924841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:45:30.924857 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:45:30.924882 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:45:30.924895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:45:30.924906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:45:30.924918 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 20:45:30.924929 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:45:30.924941 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:45:30.924952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:45:30.924967 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:45:30.924978 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:45:30.924990 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:45:30.925001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:45:30.925013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:30.925024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:45:30.925036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:45:30.925047 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:45:30.925063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:45:30.925097 systemd-journald[193]: Collecting audit messages is disabled. Jan 13 20:45:30.925127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:30.925139 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:45:30.925150 systemd-journald[193]: Journal started Jan 13 20:45:30.925174 systemd-journald[193]: Runtime Journal (/run/log/journal/3dac65473e6547a780e7b454bc0a9b3d) is 6.0M, max 48.3M, 42.2M free. Jan 13 20:45:30.931164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:45:30.911362 systemd-modules-load[194]: Inserted module 'overlay' Jan 13 20:45:30.934308 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 13 20:45:30.938405 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:45:30.942069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:45:30.947898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:45:30.948180 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:45:30.951482 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 13 20:45:30.951900 kernel: Bridge firewalling registered Jan 13 20:45:30.953218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:45:30.954640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:45:30.963416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:45:30.965823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:45:30.968818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:45:30.970822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:45:30.987538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:45:30.998914 dracut-cmdline[225]: dracut-dracut-053 Jan 13 20:45:31.002974 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:45:31.021161 systemd-resolved[232]: Positive Trust Anchors: Jan 13 20:45:31.021175 systemd-resolved[232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:45:31.021205 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:45:31.023615 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 13 20:45:31.024630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:45:31.031411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:45:31.107906 kernel: SCSI subsystem initialized Jan 13 20:45:31.116891 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:45:31.152904 kernel: iscsi: registered transport (tcp) Jan 13 20:45:31.172981 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:45:31.173019 kernel: QLogic iSCSI HBA Driver Jan 13 20:45:31.225578 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:45:31.237043 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:45:31.263132 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:45:31.263170 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:45:31.264445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:45:31.350913 kernel: raid6: avx2x4 gen() 29998 MB/s Jan 13 20:45:31.382894 kernel: raid6: avx2x2 gen() 30231 MB/s Jan 13 20:45:31.406201 kernel: raid6: avx2x1 gen() 25712 MB/s Jan 13 20:45:31.406226 kernel: raid6: using algorithm avx2x2 gen() 30231 MB/s Jan 13 20:45:31.424010 kernel: raid6: .... xor() 19715 MB/s, rmw enabled Jan 13 20:45:31.424085 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:45:31.444906 kernel: xor: automatically using best checksumming function avx Jan 13 20:45:31.598908 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:45:31.611848 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:45:31.619169 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:45:31.631249 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 13 20:45:31.636839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:45:31.642058 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:45:31.658081 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 13 20:45:31.690153 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:45:31.699036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:45:31.767571 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:45:31.779051 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:45:31.793959 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:45:31.796224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 20:45:31.799158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:45:31.800689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:45:31.807994 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:45:31.831247 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:45:31.831269 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:45:31.831475 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:45:31.831491 kernel: GPT:9289727 != 19775487 Jan 13 20:45:31.831501 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:45:31.831511 kernel: GPT:9289727 != 19775487 Jan 13 20:45:31.831521 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:45:31.831531 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:45:31.831541 kernel: libata version 3.00 loaded. Jan 13 20:45:31.813044 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:45:31.829467 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:45:31.838902 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:45:31.840891 kernel: AES CTR mode by8 optimization enabled Jan 13 20:45:31.849979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 13 20:45:31.858794 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:45:31.881375 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:45:31.881401 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:45:31.881558 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:45:31.881697 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (469) Jan 13 20:45:31.881709 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Jan 13 20:45:31.881719 kernel: scsi host0: ahci Jan 13 20:45:31.882089 kernel: scsi host1: ahci Jan 13 20:45:31.882253 kernel: scsi host2: ahci Jan 13 20:45:31.882579 kernel: scsi host3: ahci Jan 13 20:45:31.882722 kernel: scsi host4: ahci Jan 13 20:45:31.882881 kernel: scsi host5: ahci Jan 13 20:45:31.883027 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 20:45:31.883039 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 20:45:31.883049 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 20:45:31.883059 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 20:45:31.883073 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 20:45:31.883084 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 20:45:31.850242 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:45:31.855287 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:45:31.859645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:45:31.860311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:31.861868 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:45:31.873181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:31.885672 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:45:31.895160 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:45:31.904251 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:45:31.904349 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:45:31.913926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:45:31.929019 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:45:31.930223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:45:31.930280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:31.932867 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:31.935980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:31.948493 disk-uuid[570]: Primary Header is updated. Jan 13 20:45:31.948493 disk-uuid[570]: Secondary Entries is updated. Jan 13 20:45:31.948493 disk-uuid[570]: Secondary Header is updated. Jan 13 20:45:31.952504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:45:31.951964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:31.957888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:45:31.963551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:45:31.988059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:45:32.188143 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:45:32.188213 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:45:32.188238 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:45:32.189900 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:45:32.189983 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:45:32.190486 kernel: ata3.00: applying bridge limits Jan 13 20:45:32.191908 kernel: ata3.00: configured for UDMA/100 Jan 13 20:45:32.191929 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:45:32.192902 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:45:32.193908 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:45:32.236443 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:45:32.248517 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:45:32.248533 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:45:32.959742 disk-uuid[572]: The operation has completed successfully. Jan 13 20:45:32.961242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:45:32.990479 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:45:32.990620 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:45:33.019060 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:45:33.024016 sh[598]: Success Jan 13 20:45:33.036908 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:45:33.070454 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:45:33.080397 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:45:33.083323 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:45:33.095584 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:45:33.095622 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:45:33.095639 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:45:33.096590 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:45:33.097315 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:45:33.102546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:45:33.103463 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:45:33.115043 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:45:33.117298 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:45:33.126851 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:45:33.126888 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:45:33.126904 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:45:33.130889 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:45:33.140368 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:45:33.142095 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:45:33.152911 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:45:33.161067 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:45:33.216172 ignition[692]: Ignition 2.20.0 Jan 13 20:45:33.216186 ignition[692]: Stage: fetch-offline Jan 13 20:45:33.216235 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:45:33.216248 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:45:33.216373 ignition[692]: parsed url from cmdline: "" Jan 13 20:45:33.216378 ignition[692]: no config URL provided Jan 13 20:45:33.216385 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:45:33.216397 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:45:33.216432 ignition[692]: op(1): [started] loading QEMU firmware config module Jan 13 20:45:33.216439 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:45:33.226863 ignition[692]: op(1): [finished] loading QEMU firmware config module Jan 13 20:45:33.242152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:45:33.256059 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:45:33.271686 ignition[692]: parsing config with SHA512: 89ca833b4c9f6679ea5dc6cfa1385317ebadcfcba12d039c8e4dd9bb4ec03008a70aa9c84eec71244c62de15425ca9c38284c2d9be2a72b27941e7412fd7b1eb Jan 13 20:45:33.276845 unknown[692]: fetched base config from "system" Jan 13 20:45:33.276865 unknown[692]: fetched user config from "qemu" Jan 13 20:45:33.277602 ignition[692]: fetch-offline: fetch-offline passed Jan 13 20:45:33.277137 systemd-networkd[786]: lo: Link UP Jan 13 20:45:33.277771 ignition[692]: Ignition finished successfully Jan 13 20:45:33.277143 systemd-networkd[786]: lo: Gained carrier Jan 13 20:45:33.278979 systemd-networkd[786]: Enumeration completed Jan 13 20:45:33.279509 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 13 20:45:33.279542 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:45:33.279547 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:45:33.280954 systemd-networkd[786]: eth0: Link UP Jan 13 20:45:33.280958 systemd-networkd[786]: eth0: Gained carrier Jan 13 20:45:33.280965 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:45:33.282415 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:45:33.285290 systemd[1]: Reached target network.target - Network. Jan 13 20:45:33.286465 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:45:33.294089 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:45:33.294308 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:45:33.308117 ignition[789]: Ignition 2.20.0 Jan 13 20:45:33.308132 ignition[789]: Stage: kargs Jan 13 20:45:33.308302 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:45:33.308314 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:45:33.309121 ignition[789]: kargs: kargs passed Jan 13 20:45:33.309167 ignition[789]: Ignition finished successfully Jan 13 20:45:33.312429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:45:33.322136 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 20:45:33.333921 ignition[798]: Ignition 2.20.0 Jan 13 20:45:33.333935 ignition[798]: Stage: disks Jan 13 20:45:33.334124 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:45:33.334137 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:45:33.337006 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:45:33.335183 ignition[798]: disks: disks passed Jan 13 20:45:33.339773 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:45:33.335242 ignition[798]: Ignition finished successfully Jan 13 20:45:33.341341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:45:33.342770 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:45:33.344629 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:45:33.346959 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:45:33.360317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:45:33.372395 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:45:33.378993 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:45:33.390985 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:45:33.477892 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:45:33.478065 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:45:33.480421 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:45:33.493025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:45:33.495438 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:45:33.497034 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 13 20:45:33.503192 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817) Jan 13 20:45:33.503221 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:45:33.503235 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:45:33.497089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:45:33.509060 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:45:33.509081 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:45:33.497118 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:45:33.505559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:45:33.510116 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:45:33.512957 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:45:33.550477 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:45:33.554968 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:45:33.559333 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:45:33.563447 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:45:33.655005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:45:33.661066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:45:33.664377 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 13 20:45:33.669903 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:45:33.690959 ignition[931]: INFO : Ignition 2.20.0 Jan 13 20:45:33.690959 ignition[931]: INFO : Stage: mount Jan 13 20:45:33.692918 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:45:33.692918 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:45:33.692918 ignition[931]: INFO : mount: mount passed Jan 13 20:45:33.692918 ignition[931]: INFO : Ignition finished successfully Jan 13 20:45:33.694486 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:45:33.700594 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:45:33.712007 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:45:34.095135 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:45:34.104083 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:45:34.113497 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Jan 13 20:45:34.113566 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:45:34.113583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:45:34.114564 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:45:34.117901 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:45:34.120059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:45:34.139624 ignition[962]: INFO : Ignition 2.20.0
Jan 13 20:45:34.139624 ignition[962]: INFO : Stage: files
Jan 13 20:45:34.142093 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:34.142093 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:34.142093 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:45:34.142093 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:45:34.142093 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:45:34.149768 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:45:34.149768 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:45:34.149768 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:45:34.149768 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:45:34.149768 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:45:34.145532 unknown[962]: wrote ssh authorized keys file for user: core
Jan 13 20:45:34.177072 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:45:34.291279 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:45:34.291279 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:34.295193 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:34.295193 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:45:34.298842 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:45:34.300576 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:45:34.302331 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:45:34.304248 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:45:34.306390 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:45:34.308906 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:34.311223 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:34.313350 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:45:34.316492 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:45:34.319457 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:45:34.322224 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 20:45:34.693468 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:45:35.026980 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:45:35.026980 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:45:35.030733 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:45:35.056265 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:45:35.061427 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:45:35.063011 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:45:35.063011 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:45:35.063011 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:45:35.063011 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:35.063011 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:35.063011 ignition[962]: INFO : files: files passed
Jan 13 20:45:35.063011 ignition[962]: INFO : Ignition finished successfully
Jan 13 20:45:35.064122 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:45:35.075109 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:45:35.078261 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:45:35.080214 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:45:35.080354 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:45:35.088551 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:45:35.091611 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:35.091611 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:35.096351 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:35.094510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:35.096511 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:45:35.099705 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:45:35.132394 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:45:35.132525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:45:35.133057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:45:35.133430 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:45:35.133844 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:45:35.134803 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:45:35.149131 systemd-networkd[786]: eth0: Gained IPv6LL
Jan 13 20:45:35.157113 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:35.173076 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:45:35.183648 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:35.185071 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:35.187511 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:45:35.189761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:45:35.189938 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:35.192604 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:45:35.194383 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:45:35.196698 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:45:35.302861 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:45:35.304952 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:45:35.307136 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:45:35.309251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:45:35.311570 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:45:35.313560 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:45:35.315822 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:45:35.317624 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:45:35.317777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:45:35.320240 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:35.321720 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:35.323800 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:45:35.323945 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:35.326052 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:45:35.326159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:45:35.328405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:45:35.328512 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:45:35.330519 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:45:35.332407 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:45:35.336008 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:35.338389 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:45:35.340483 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:45:35.342535 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:45:35.342651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:45:35.344841 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:45:35.344999 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:45:35.348388 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:45:35.348563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:35.350781 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:45:35.350939 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:45:35.361032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:45:35.362662 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:45:35.363814 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:45:35.363999 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:35.366478 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:45:35.366790 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:45:35.373036 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:45:35.373395 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:45:35.377909 ignition[1017]: INFO : Ignition 2.20.0
Jan 13 20:45:35.377909 ignition[1017]: INFO : Stage: umount
Jan 13 20:45:35.377909 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:35.377909 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:45:35.377909 ignition[1017]: INFO : umount: umount passed
Jan 13 20:45:35.377909 ignition[1017]: INFO : Ignition finished successfully
Jan 13 20:45:35.378130 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:45:35.378252 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:45:35.379795 systemd[1]: Stopped target network.target - Network.
Jan 13 20:45:35.381723 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:45:35.381794 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:45:35.384228 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:45:35.384304 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:45:35.384540 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:45:35.384584 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:45:35.384907 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:45:35.384970 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:45:35.385537 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:45:35.392244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:35.396969 systemd-networkd[786]: eth0: DHCPv6 lease lost
Jan 13 20:45:35.399475 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:45:35.399680 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:45:35.402155 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:45:35.402214 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:35.410021 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:45:35.411919 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:45:35.411985 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:45:35.414567 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:35.418993 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:45:35.419691 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:45:35.419838 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:35.433136 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:45:35.433206 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:35.434513 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:45:35.434561 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:35.436948 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:45:35.436993 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:35.439976 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:45:35.440149 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:35.442693 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:45:35.442801 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:45:35.445362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:45:35.445433 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:35.447957 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:45:35.448014 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:35.450154 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:45:35.450220 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:45:35.452855 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:45:35.452935 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:45:35.455040 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:45:35.455102 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:35.462022 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:45:35.463950 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:45:35.464018 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:35.466472 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:45:35.466533 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:35.468961 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:45:35.469021 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:35.471522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:45:35.471581 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:35.474377 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:45:35.474506 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:45:35.560763 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:45:35.560939 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:45:35.562169 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:45:35.564664 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:45:35.564719 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:45:35.577987 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:45:35.586631 systemd[1]: Switching root.
Jan 13 20:45:35.622607 systemd-journald[193]: Journal stopped
Jan 13 20:45:36.715464 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:45:36.715528 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:45:36.715542 kernel: SELinux: policy capability open_perms=1
Jan 13 20:45:36.715554 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:45:36.715571 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:45:36.715586 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:45:36.715597 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:45:36.715608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:45:36.715619 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:45:36.715631 kernel: audit: type=1403 audit(1736801135.969:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:45:36.715643 systemd[1]: Successfully loaded SELinux policy in 41.001ms.
Jan 13 20:45:36.715667 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.678ms.
Jan 13 20:45:36.715680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:45:36.715692 systemd[1]: Detected virtualization kvm.
Jan 13 20:45:36.715706 systemd[1]: Detected architecture x86-64.
Jan 13 20:45:36.715718 systemd[1]: Detected first boot.
Jan 13 20:45:36.715729 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:45:36.715742 zram_generator::config[1064]: No configuration found.
Jan 13 20:45:36.715754 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:45:36.715766 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:45:36.715779 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:45:36.715795 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:45:36.715814 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:45:36.715834 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:45:36.715848 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:45:36.715865 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:45:36.715895 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:45:36.715911 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:45:36.715926 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:45:36.715941 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:45:36.715959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:36.715974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:36.715988 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:45:36.716003 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:45:36.716017 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:45:36.716031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:45:36.716045 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:45:36.716060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:36.716074 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:45:36.716090 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:45:36.716105 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:45:36.716119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:45:36.716134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:36.716148 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:45:36.716168 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:45:36.716182 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:45:36.716198 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:45:36.716215 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:45:36.716230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:36.716259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:36.716274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:36.716288 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:45:36.716302 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:45:36.716317 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:45:36.716331 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:45:36.716345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:36.716362 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:45:36.716381 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:45:36.716395 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:45:36.716410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:45:36.716431 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:45:36.716446 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:45:36.716461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:36.716475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:45:36.716489 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:45:36.716506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:36.716520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:45:36.716536 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:36.716550 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:45:36.716565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:36.716579 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:45:36.716594 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:45:36.716610 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:45:36.716627 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:45:36.716641 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:45:36.716655 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:45:36.716669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:45:36.716683 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:45:36.716697 kernel: loop: module loaded
Jan 13 20:45:36.716710 kernel: fuse: init (API version 7.39)
Jan 13 20:45:36.716724 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:45:36.716738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:45:36.716755 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:45:36.716769 systemd[1]: Stopped verity-setup.service.
Jan 13 20:45:36.716784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:36.716818 systemd-journald[1128]: Collecting audit messages is disabled.
Jan 13 20:45:36.716846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:45:36.716860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:45:36.716891 systemd-journald[1128]: Journal started
Jan 13 20:45:36.716918 systemd-journald[1128]: Runtime Journal (/run/log/journal/3dac65473e6547a780e7b454bc0a9b3d) is 6.0M, max 48.3M, 42.2M free.
Jan 13 20:45:36.492885 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:45:36.515962 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:45:36.516450 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:45:36.721053 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:45:36.721936 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:45:36.723293 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:45:36.724533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:45:36.725791 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:45:36.731173 kernel: ACPI: bus type drm_connector registered
Jan 13 20:45:36.727160 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:36.728996 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:45:36.729193 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:45:36.731367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:36.731538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:36.733492 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:45:36.733668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:45:36.735119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:36.735291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:36.737024 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:45:36.737185 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:45:36.738780 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:45:36.740308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:36.740497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:36.742020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:36.743596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:45:36.745267 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:45:36.760473 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:45:36.766994 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:45:36.769617 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:45:36.770942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:45:36.770974 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:45:36.773054 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:45:36.775457 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:45:36.778593 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:45:36.779971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:36.783608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:45:36.787759 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:45:36.789067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:45:36.792056 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:45:36.793336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:45:36.796380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:45:36.801300 systemd-journald[1128]: Time spent on flushing to /var/log/journal/3dac65473e6547a780e7b454bc0a9b3d is 17.633ms for 1042 entries.
Jan 13 20:45:36.801300 systemd-journald[1128]: System Journal (/var/log/journal/3dac65473e6547a780e7b454bc0a9b3d) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:45:36.832313 systemd-journald[1128]: Received client request to flush runtime journal.
Jan 13 20:45:36.802652 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:45:36.817809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:45:36.821145 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:45:36.822680 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:45:36.824915 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:45:36.826817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:45:36.831228 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:45:36.834946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:45:36.837212 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:45:36.839901 kernel: loop0: detected capacity change from 0 to 205544 Jan 13 20:45:36.845426 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:45:36.854136 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:45:36.857730 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:45:36.862587 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 13 20:45:36.862612 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 13 20:45:36.870603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:45:36.873923 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:45:36.885190 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:45:36.887353 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 13 20:45:36.888372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:45:36.891143 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:45:36.902902 kernel: loop1: detected capacity change from 0 to 140992 Jan 13 20:45:36.915111 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:45:36.926073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:45:36.942889 kernel: loop2: detected capacity change from 0 to 138184 Jan 13 20:45:36.945990 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 13 20:45:36.946010 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jan 13 20:45:36.952196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:45:36.976899 kernel: loop3: detected capacity change from 0 to 205544 Jan 13 20:45:36.985913 kernel: loop4: detected capacity change from 0 to 140992 Jan 13 20:45:36.996076 kernel: loop5: detected capacity change from 0 to 138184 Jan 13 20:45:37.007979 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:45:37.008985 (sd-merge)[1205]: Merged extensions into '/usr'. Jan 13 20:45:37.012937 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:45:37.012951 systemd[1]: Reloading... Jan 13 20:45:37.074163 zram_generator::config[1227]: No configuration found. Jan 13 20:45:37.135009 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:45:37.194567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 20:45:37.244411 systemd[1]: Reloading finished in 230 ms. Jan 13 20:45:37.275611 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:45:37.277118 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:45:37.293206 systemd[1]: Starting ensure-sysext.service... Jan 13 20:45:37.296100 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:45:37.300964 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:45:37.300981 systemd[1]: Reloading... Jan 13 20:45:37.326511 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:45:37.327015 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:45:37.328741 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:45:37.329170 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 13 20:45:37.329285 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 13 20:45:37.337673 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:45:37.338074 systemd-tmpfiles[1269]: Skipping /boot Jan 13 20:45:37.356906 zram_generator::config[1296]: No configuration found. Jan 13 20:45:37.358201 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:45:37.360095 systemd-tmpfiles[1269]: Skipping /boot Jan 13 20:45:37.470568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:45:37.526040 systemd[1]: Reloading finished in 224 ms. 
Jan 13 20:45:37.545362 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:45:37.557620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:45:37.567475 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:45:37.570585 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:45:37.573199 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:45:37.578189 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:45:37.581365 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:45:37.583946 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:45:37.587821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:37.588257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:45:37.592139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:45:37.595046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:45:37.597541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:45:37.598913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:45:37.599015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:37.609745 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:45:37.611624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 20:45:37.611832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:45:37.613890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:45:37.614089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:45:37.616238 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:45:37.616455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:45:37.621393 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:45:37.626277 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Jan 13 20:45:37.626835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:45:37.630316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:37.630763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:45:37.637545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:45:37.642334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:45:37.648193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:45:37.649525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:45:37.654977 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:45:37.655908 augenrules[1371]: No rules Jan 13 20:45:37.656331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:37.657556 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 13 20:45:37.657796 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:45:37.659741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:45:37.659920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:45:37.661585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:45:37.661860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:45:37.663551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:45:37.665909 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:45:37.668082 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:45:37.668670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:45:37.670583 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:45:37.690665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:37.699264 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:45:37.701036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:45:37.704758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:45:37.720910 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385) Jan 13 20:45:37.721180 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:45:37.723713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:45:37.731101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:45:37.732386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:45:37.734532 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:45:37.736962 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:45:37.738172 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:45:37.739970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:45:37.740162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:45:37.742158 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:45:37.742341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:45:37.753306 augenrules[1404]: /sbin/augenrules: No change Jan 13 20:45:37.759394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:45:37.759817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:45:37.761703 systemd[1]: Finished ensure-sysext.service. Jan 13 20:45:37.765538 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:45:37.770508 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:45:37.770738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:45:37.776345 augenrules[1438]: No rules Jan 13 20:45:37.778296 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:45:37.778581 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:45:37.785241 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:45:37.785334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 13 20:45:37.797180 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:45:37.797306 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:45:37.801181 systemd-resolved[1337]: Positive Trust Anchors: Jan 13 20:45:37.801558 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:45:37.801593 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:45:37.809740 systemd-resolved[1337]: Defaulting to hostname 'linux'. Jan 13 20:45:37.813245 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:45:37.814598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:45:37.817910 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:45:37.822860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:45:37.829902 systemd-networkd[1418]: lo: Link UP Jan 13 20:45:37.830061 systemd-networkd[1418]: lo: Gained carrier Jan 13 20:45:37.832461 systemd-networkd[1418]: Enumeration completed Jan 13 20:45:37.833230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 13 20:45:37.836922 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:45:37.836935 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:45:37.837719 systemd-networkd[1418]: eth0: Link UP Jan 13 20:45:37.837732 systemd-networkd[1418]: eth0: Gained carrier Jan 13 20:45:37.837745 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:45:37.838577 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:45:37.840523 systemd[1]: Reached target network.target - Network. Jan 13 20:45:37.847300 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 13 20:45:37.849853 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:45:37.849903 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:45:37.850130 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:45:37.850365 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:45:37.849040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:45:37.852724 systemd-networkd[1418]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:45:37.861897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 20:45:37.862592 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:45:37.887960 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:45:37.889233 systemd-timesyncd[1448]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:45:37.889326 systemd-timesyncd[1448]: Initial clock synchronization to Mon 2025-01-13 20:45:38.166676 UTC. 
Jan 13 20:45:37.889683 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:45:37.911906 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:45:37.916253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:37.923897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:45:37.924229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:37.984303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:45:37.995055 kernel: kvm_amd: TSC scaling supported Jan 13 20:45:37.995118 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:45:37.995137 kernel: kvm_amd: Nested Paging enabled Jan 13 20:45:37.995150 kernel: kvm_amd: LBR virtualization supported Jan 13 20:45:37.996127 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:45:37.996150 kernel: kvm_amd: Virtual GIF supported Jan 13 20:45:38.018924 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:45:38.051365 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:45:38.063109 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:45:38.064830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:45:38.078471 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:45:38.114377 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:45:38.116222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:45:38.117454 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:45:38.118736 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 13 20:45:38.120116 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:45:38.121737 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:45:38.123120 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:45:38.124976 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:45:38.126372 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:45:38.126406 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:45:38.127395 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:45:38.129048 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:45:38.131953 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:45:38.145273 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:45:38.148536 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:45:38.150629 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:45:38.152138 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:45:38.153448 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:45:38.154729 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:45:38.154770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:45:38.156343 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:45:38.159069 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:45:38.160156 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 13 20:45:38.164992 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:45:38.168141 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:45:38.169240 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:45:38.173107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:45:38.176325 jq[1476]: false Jan 13 20:45:38.177167 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:45:38.182126 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:45:38.186153 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:45:38.192682 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:45:38.193334 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:45:38.193850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:45:38.195485 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:45:38.201097 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 20:45:38.202105 extend-filesystems[1477]: Found loop3 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found loop4 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found loop5 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found sr0 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found vda Jan 13 20:45:38.202105 extend-filesystems[1477]: Found vda1 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found vda2 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found vda3 Jan 13 20:45:38.202105 extend-filesystems[1477]: Found usr Jan 13 20:45:38.202105 extend-filesystems[1477]: Found vda4 Jan 13 20:45:38.216181 extend-filesystems[1477]: Found vda6 Jan 13 20:45:38.216181 extend-filesystems[1477]: Found vda7 Jan 13 20:45:38.216181 extend-filesystems[1477]: Found vda9 Jan 13 20:45:38.216181 extend-filesystems[1477]: Checking size of /dev/vda9 Jan 13 20:45:38.211855 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:45:38.205369 dbus-daemon[1475]: [system] SELinux support is enabled Jan 13 20:45:38.229184 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:45:38.229213 extend-filesystems[1477]: Resized partition /dev/vda9 Jan 13 20:45:38.218974 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:45:38.233569 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:45:38.229471 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:45:38.235408 jq[1490]: true Jan 13 20:45:38.229760 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:45:38.235679 update_engine[1489]: I20250113 20:45:38.232473 1489 main.cc:92] Flatcar Update Engine starting Jan 13 20:45:38.235679 update_engine[1489]: I20250113 20:45:38.235215 1489 update_check_scheduler.cc:74] Next update check in 5m24s Jan 13 20:45:38.230217 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 13 20:45:38.230468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:45:38.237007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:45:38.237313 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:45:38.241129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1400) Jan 13 20:45:38.257046 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:45:38.263490 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:45:38.272382 jq[1501]: true Jan 13 20:45:38.278175 tar[1499]: linux-amd64/helm Jan 13 20:45:38.280610 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:45:38.280610 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:45:38.280610 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:45:38.284856 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Jan 13 20:45:38.284822 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:45:38.286375 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:45:38.300303 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:45:38.301958 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:45:38.301985 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:45:38.303423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 13 20:45:38.303439 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:45:38.314091 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:45:38.332487 systemd-logind[1488]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:45:38.332520 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:45:38.333407 systemd-logind[1488]: New seat seat0. Jan 13 20:45:38.335169 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:45:38.354508 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:45:38.357184 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:45:38.359341 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:45:38.371715 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:45:38.480914 containerd[1502]: time="2025-01-13T20:45:38.480809355Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:45:38.497272 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:45:38.512795 containerd[1502]: time="2025-01-13T20:45:38.512741058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515196 containerd[1502]: time="2025-01-13T20:45:38.515126876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515196 containerd[1502]: time="2025-01-13T20:45:38.515179120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 13 20:45:38.515270 containerd[1502]: time="2025-01-13T20:45:38.515202904Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:45:38.515469 containerd[1502]: time="2025-01-13T20:45:38.515433486Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:45:38.515469 containerd[1502]: time="2025-01-13T20:45:38.515464496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515603 containerd[1502]: time="2025-01-13T20:45:38.515570871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515603 containerd[1502]: time="2025-01-13T20:45:38.515599175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515895 containerd[1502]: time="2025-01-13T20:45:38.515857502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515895 containerd[1502]: time="2025-01-13T20:45:38.515885733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515959 containerd[1502]: time="2025-01-13T20:45:38.515928688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:38.515959 containerd[1502]: time="2025-01-13T20:45:38.515944623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:45:38.516110 containerd[1502]: time="2025-01-13T20:45:38.516072925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:38.516488 containerd[1502]: time="2025-01-13T20:45:38.516394683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:38.517187 containerd[1502]: time="2025-01-13T20:45:38.516577739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:38.517187 containerd[1502]: time="2025-01-13T20:45:38.516605805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:45:38.517187 containerd[1502]: time="2025-01-13T20:45:38.516748083Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:45:38.517187 containerd[1502]: time="2025-01-13T20:45:38.516825542Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:45:38.523322 containerd[1502]: time="2025-01-13T20:45:38.523284314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:45:38.523322 containerd[1502]: time="2025-01-13T20:45:38.523325848Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:45:38.523444 containerd[1502]: time="2025-01-13T20:45:38.523340404Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:45:38.523444 containerd[1502]: time="2025-01-13T20:45:38.523357346Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 13 20:45:38.523444 containerd[1502]: time="2025-01-13T20:45:38.523370534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:45:38.523570 containerd[1502]: time="2025-01-13T20:45:38.523499002Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:45:38.523794 containerd[1502]: time="2025-01-13T20:45:38.523771274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:45:38.523957 containerd[1502]: time="2025-01-13T20:45:38.523915045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:45:38.523957 containerd[1502]: time="2025-01-13T20:45:38.523938704Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:45:38.524003 containerd[1502]: time="2025-01-13T20:45:38.523971985Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:45:38.524003 containerd[1502]: time="2025-01-13T20:45:38.523986231Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524003 containerd[1502]: time="2025-01-13T20:45:38.523999636Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524066 containerd[1502]: time="2025-01-13T20:45:38.524012130Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524066 containerd[1502]: time="2025-01-13T20:45:38.524026313Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 20:45:38.524066 containerd[1502]: time="2025-01-13T20:45:38.524040196Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524066 containerd[1502]: time="2025-01-13T20:45:38.524053820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524066 containerd[1502]: time="2025-01-13T20:45:38.524066841Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524160 containerd[1502]: time="2025-01-13T20:45:38.524080382Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:45:38.524160 containerd[1502]: time="2025-01-13T20:45:38.524101159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.524160 containerd[1502]: time="2025-01-13T20:45:38.524115384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526476 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525386508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525537361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525564484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525590869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525613835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525637329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525664627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525696238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525717887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525740976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525774671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525796009Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525842364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525865847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.526680 containerd[1502]: time="2025-01-13T20:45:38.525916702Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.525995674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526027027Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526046155Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526067804Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526086113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526109077Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526123147Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:45:38.527511 containerd[1502]: time="2025-01-13T20:45:38.526142473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.526741083Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.526976590Z" level=info msg="Connect containerd service" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.527016134Z" level=info msg="using legacy CRI server" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.527026440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.527177811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528007408Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528187996Z" level=info msg="Start subscribing containerd event" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528274205Z" level=info msg="Start recovering state" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528417884Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528627947Z" level=info msg="Start event monitor" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528648465Z" level=info msg="Start snapshots syncer" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528657994Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528659372Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528708827Z" level=info msg="Start streaming server" Jan 13 20:45:38.529548 containerd[1502]: time="2025-01-13T20:45:38.528777328Z" level=info msg="containerd successfully booted in 0.049226s" Jan 13 20:45:38.536127 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:45:38.537283 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:45:38.544723 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:45:38.545104 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:45:38.550107 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:45:38.574115 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:45:38.585240 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:45:38.588481 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:45:38.590223 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:45:38.680478 tar[1499]: linux-amd64/LICENSE Jan 13 20:45:38.680660 tar[1499]: linux-amd64/README.md Jan 13 20:45:38.694626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:45:38.869506 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 13 20:45:38.872224 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:56106.service - OpenSSH per-connection server daemon (10.0.0.1:56106). Jan 13 20:45:38.931485 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 56106 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:38.933900 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:38.943184 systemd-logind[1488]: New session 1 of user core. Jan 13 20:45:38.944680 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:45:38.957155 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:45:38.969641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:45:38.973941 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:45:38.984736 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:45:39.055034 systemd-networkd[1418]: eth0: Gained IPv6LL Jan 13 20:45:39.057723 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:45:39.060376 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:45:39.069153 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:45:39.072080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:39.074888 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:45:39.099362 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:45:39.099629 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:45:39.101497 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:45:39.102098 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 20:45:39.141236 systemd[1570]: Queued start job for default target default.target. Jan 13 20:45:39.164343 systemd[1570]: Created slice app.slice - User Application Slice. Jan 13 20:45:39.164376 systemd[1570]: Reached target paths.target - Paths. Jan 13 20:45:39.164394 systemd[1570]: Reached target timers.target - Timers. Jan 13 20:45:39.166095 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:45:39.181158 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:45:39.181299 systemd[1570]: Reached target sockets.target - Sockets. Jan 13 20:45:39.181314 systemd[1570]: Reached target basic.target - Basic System. Jan 13 20:45:39.181348 systemd[1570]: Reached target default.target - Main User Target. Jan 13 20:45:39.181382 systemd[1570]: Startup finished in 188ms. Jan 13 20:45:39.181965 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:45:39.193053 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:45:39.257409 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:56108.service - OpenSSH per-connection server daemon (10.0.0.1:56108). Jan 13 20:45:39.301340 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 56108 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:39.303260 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:39.309759 systemd-logind[1488]: New session 2 of user core. Jan 13 20:45:39.318102 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:45:39.375988 sshd[1600]: Connection closed by 10.0.0.1 port 56108 Jan 13 20:45:39.377027 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:39.383565 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:56108.service: Deactivated successfully. Jan 13 20:45:39.384973 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 13 20:45:39.386346 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:45:39.387441 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:56112.service - OpenSSH per-connection server daemon (10.0.0.1:56112). Jan 13 20:45:39.389722 systemd-logind[1488]: Removed session 2. Jan 13 20:45:39.438114 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 56112 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:39.439492 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:39.443386 systemd-logind[1488]: New session 3 of user core. Jan 13 20:45:39.450035 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:45:39.506526 sshd[1607]: Connection closed by 10.0.0.1 port 56112 Jan 13 20:45:39.506875 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:39.510210 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:56112.service: Deactivated successfully. Jan 13 20:45:39.512122 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:45:39.512710 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:45:39.513688 systemd-logind[1488]: Removed session 3. Jan 13 20:45:39.771349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:39.773481 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:45:39.775116 systemd[1]: Startup finished in 755ms (kernel) + 5.267s (initrd) + 3.844s (userspace) = 9.867s. 
Jan 13 20:45:39.777611 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:40.195578 kubelet[1616]: E0113 20:45:40.195420 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:40.199413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:40.199638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:49.684618 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:44798.service - OpenSSH per-connection server daemon (10.0.0.1:44798). Jan 13 20:45:49.725337 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 44798 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:49.727137 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:49.731516 systemd-logind[1488]: New session 4 of user core. Jan 13 20:45:49.738104 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:45:49.794412 sshd[1631]: Connection closed by 10.0.0.1 port 44798 Jan 13 20:45:49.794830 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:49.806152 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:44798.service: Deactivated successfully. Jan 13 20:45:49.808274 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:45:49.810065 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:45:49.830368 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:44814.service - OpenSSH per-connection server daemon (10.0.0.1:44814). Jan 13 20:45:49.831442 systemd-logind[1488]: Removed session 4. 
Jan 13 20:45:49.870364 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 44814 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:49.871867 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:49.876376 systemd-logind[1488]: New session 5 of user core. Jan 13 20:45:49.886132 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:45:49.936124 sshd[1638]: Connection closed by 10.0.0.1 port 44814 Jan 13 20:45:49.936435 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:49.953741 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:44814.service: Deactivated successfully. Jan 13 20:45:49.956287 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:45:49.958302 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:45:49.969221 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:44820.service - OpenSSH per-connection server daemon (10.0.0.1:44820). Jan 13 20:45:49.970233 systemd-logind[1488]: Removed session 5. Jan 13 20:45:50.005901 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 44820 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:50.007667 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:50.011907 systemd-logind[1488]: New session 6 of user core. Jan 13 20:45:50.020991 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:45:50.076298 sshd[1645]: Connection closed by 10.0.0.1 port 44820 Jan 13 20:45:50.076696 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:50.087614 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:44820.service: Deactivated successfully. Jan 13 20:45:50.089173 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:45:50.090585 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit. 
Jan 13 20:45:50.091896 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:44832.service - OpenSSH per-connection server daemon (10.0.0.1:44832). Jan 13 20:45:50.092687 systemd-logind[1488]: Removed session 6. Jan 13 20:45:50.134779 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 44832 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:50.136354 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:50.140332 systemd-logind[1488]: New session 7 of user core. Jan 13 20:45:50.150000 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:45:50.209515 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:45:50.209939 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:50.210920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:45:50.219078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:50.233634 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:50.235536 sshd[1652]: Connection closed by 10.0.0.1 port 44832 Jan 13 20:45:50.235950 sshd-session[1650]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:50.240384 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:44832.service: Deactivated successfully. Jan 13 20:45:50.242846 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:45:50.244857 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:45:50.246202 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:44844.service - OpenSSH per-connection server daemon (10.0.0.1:44844). Jan 13 20:45:50.247091 systemd-logind[1488]: Removed session 7. 
Jan 13 20:45:50.291379 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 44844 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:50.293292 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:50.297598 systemd-logind[1488]: New session 8 of user core. Jan 13 20:45:50.312089 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:45:50.368104 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:45:50.368556 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:50.372951 sudo[1667]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:50.379455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:50.380045 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:45:50.380453 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:50.385913 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:50.390031 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:45:50.426529 kubelet[1672]: E0113 20:45:50.426420 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:50.428743 augenrules[1700]: No rules Jan 13 20:45:50.430953 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:45:50.431233 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 13 20:45:50.433022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:50.433127 sudo[1666]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:50.433215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:50.434715 sshd[1663]: Connection closed by 10.0.0.1 port 44844 Jan 13 20:45:50.435104 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:50.445782 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:44844.service: Deactivated successfully. Jan 13 20:45:50.447586 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:45:50.449074 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:45:50.460145 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:44854.service - OpenSSH per-connection server daemon (10.0.0.1:44854). Jan 13 20:45:50.461132 systemd-logind[1488]: Removed session 8. Jan 13 20:45:50.501398 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 44854 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:50.503321 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:50.507501 systemd-logind[1488]: New session 9 of user core. Jan 13 20:45:50.517008 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:45:50.570780 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:45:50.571132 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:50.844102 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 20:45:50.844345 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:45:51.106366 dockerd[1732]: time="2025-01-13T20:45:51.106203297Z" level=info msg="Starting up" Jan 13 20:45:51.505902 dockerd[1732]: time="2025-01-13T20:45:51.505709884Z" level=info msg="Loading containers: start." Jan 13 20:45:51.671924 kernel: Initializing XFRM netlink socket Jan 13 20:45:51.753737 systemd-networkd[1418]: docker0: Link UP Jan 13 20:45:51.980040 dockerd[1732]: time="2025-01-13T20:45:51.979900993Z" level=info msg="Loading containers: done." Jan 13 20:45:51.995002 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3394611687-merged.mount: Deactivated successfully. Jan 13 20:45:51.997340 dockerd[1732]: time="2025-01-13T20:45:51.997292102Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:45:51.997439 dockerd[1732]: time="2025-01-13T20:45:51.997414628Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:45:51.997588 dockerd[1732]: time="2025-01-13T20:45:51.997561810Z" level=info msg="Daemon has completed initialization" Jan 13 20:45:52.044407 dockerd[1732]: time="2025-01-13T20:45:52.044306513Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:45:52.044700 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:45:52.786320 containerd[1502]: time="2025-01-13T20:45:52.786259501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:45:55.143994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449654116.mount: Deactivated successfully. 
Jan 13 20:45:56.122825 containerd[1502]: time="2025-01-13T20:45:56.122749389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:56.123543 containerd[1502]: time="2025-01-13T20:45:56.123479779Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Jan 13 20:45:56.125050 containerd[1502]: time="2025-01-13T20:45:56.125019953Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:56.127751 containerd[1502]: time="2025-01-13T20:45:56.127717104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:56.128626 containerd[1502]: time="2025-01-13T20:45:56.128597715Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 3.342298394s" Jan 13 20:45:56.128667 containerd[1502]: time="2025-01-13T20:45:56.128624920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 20:45:56.130017 containerd[1502]: time="2025-01-13T20:45:56.129975667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:45:57.762273 containerd[1502]: time="2025-01-13T20:45:57.762216580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:57.763125 containerd[1502]: time="2025-01-13T20:45:57.763055515Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Jan 13 20:45:57.764148 containerd[1502]: time="2025-01-13T20:45:57.764090405Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:57.767189 containerd[1502]: time="2025-01-13T20:45:57.767137015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:57.768455 containerd[1502]: time="2025-01-13T20:45:57.768405423Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.638399016s" Jan 13 20:45:57.768455 containerd[1502]: time="2025-01-13T20:45:57.768446131Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 20:45:57.768971 containerd[1502]: time="2025-01-13T20:45:57.768931231Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:46:00.335565 containerd[1502]: time="2025-01-13T20:46:00.335496802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:00.336512 containerd[1502]: time="2025-01-13T20:46:00.336458986Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Jan 13 20:46:00.337625 containerd[1502]: time="2025-01-13T20:46:00.337586966Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:00.341460 containerd[1502]: time="2025-01-13T20:46:00.341402017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:00.342441 containerd[1502]: time="2025-01-13T20:46:00.342406818Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 2.573439779s" Jan 13 20:46:00.342486 containerd[1502]: time="2025-01-13T20:46:00.342442703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 20:46:00.343127 containerd[1502]: time="2025-01-13T20:46:00.343043514Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:46:00.500597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:46:00.512080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:00.662960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:46:00.668305 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:46:00.895666 kubelet[1999]: E0113 20:46:00.895600 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:46:00.900134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:46:00.900438 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:46:01.938494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769841512.mount: Deactivated successfully. Jan 13 20:46:02.896980 containerd[1502]: time="2025-01-13T20:46:02.896901222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:02.897591 containerd[1502]: time="2025-01-13T20:46:02.897551317Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 20:46:02.900551 containerd[1502]: time="2025-01-13T20:46:02.899215984Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:02.902518 containerd[1502]: time="2025-01-13T20:46:02.902483023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:02.903167 containerd[1502]: time="2025-01-13T20:46:02.903128161Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id 
\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.560040558s" Jan 13 20:46:02.903203 containerd[1502]: time="2025-01-13T20:46:02.903167941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 20:46:02.903723 containerd[1502]: time="2025-01-13T20:46:02.903638638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:46:03.801403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940098994.mount: Deactivated successfully. Jan 13 20:46:05.377538 containerd[1502]: time="2025-01-13T20:46:05.377474625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.378775 containerd[1502]: time="2025-01-13T20:46:05.378733956Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:46:05.380409 containerd[1502]: time="2025-01-13T20:46:05.380381671Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.384147 containerd[1502]: time="2025-01-13T20:46:05.384114596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.385434 containerd[1502]: time="2025-01-13T20:46:05.385375601Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.481663246s" Jan 13 20:46:05.385434 containerd[1502]: time="2025-01-13T20:46:05.385422403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:46:05.385931 containerd[1502]: time="2025-01-13T20:46:05.385909947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:46:05.875910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273731543.mount: Deactivated successfully. Jan 13 20:46:05.882519 containerd[1502]: time="2025-01-13T20:46:05.882460782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.883438 containerd[1502]: time="2025-01-13T20:46:05.883350521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 13 20:46:05.885007 containerd[1502]: time="2025-01-13T20:46:05.884956588Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.887492 containerd[1502]: time="2025-01-13T20:46:05.887463456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.888521 containerd[1502]: time="2025-01-13T20:46:05.888470876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.534355ms" Jan 13 20:46:05.888521 containerd[1502]: time="2025-01-13T20:46:05.888518120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 20:46:05.889079 containerd[1502]: time="2025-01-13T20:46:05.889000109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:46:06.599962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332103988.mount: Deactivated successfully. Jan 13 20:46:08.239358 containerd[1502]: time="2025-01-13T20:46:08.239297917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:08.241839 containerd[1502]: time="2025-01-13T20:46:08.241795390Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 13 20:46:08.243131 containerd[1502]: time="2025-01-13T20:46:08.243108253Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:08.246213 containerd[1502]: time="2025-01-13T20:46:08.246171215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:08.247380 containerd[1502]: time="2025-01-13T20:46:08.247307524Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.35828006s" Jan 13 
20:46:08.247380 containerd[1502]: time="2025-01-13T20:46:08.247373149Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 20:46:10.246234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:10.266076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:10.290068 systemd[1]: Reloading requested from client PID 2146 ('systemctl') (unit session-9.scope)... Jan 13 20:46:10.290085 systemd[1]: Reloading... Jan 13 20:46:10.379945 zram_generator::config[2188]: No configuration found. Jan 13 20:46:10.612246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:46:10.718739 systemd[1]: Reloading finished in 428 ms. Jan 13 20:46:10.775539 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:46:10.775689 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:46:10.776121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:10.778343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:10.936560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:10.941606 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:46:10.977170 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:46:10.977170 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:46:10.977170 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:46:10.978217 kubelet[2234]: I0113 20:46:10.978165 2234 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:46:11.265580 kubelet[2234]: I0113 20:46:11.265537 2234 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:46:11.265580 kubelet[2234]: I0113 20:46:11.265572 2234 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:46:11.265826 kubelet[2234]: I0113 20:46:11.265805 2234 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:46:11.290098 kubelet[2234]: I0113 20:46:11.290034 2234 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:46:11.290388 kubelet[2234]: E0113 20:46:11.290350 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:46:11.300738 kubelet[2234]: E0113 20:46:11.300693 2234 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:46:11.300738 kubelet[2234]: I0113 20:46:11.300727 2234 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:46:11.306750 kubelet[2234]: I0113 20:46:11.306718 2234 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:46:11.307775 kubelet[2234]: I0113 20:46:11.307741 2234 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:46:11.307965 kubelet[2234]: I0113 20:46:11.307925 2234 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:46:11.308107 kubelet[2234]: I0113 20:46:11.307952 2234 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"C
PUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:46:11.308201 kubelet[2234]: I0113 20:46:11.308116 2234 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:46:11.308201 kubelet[2234]: I0113 20:46:11.308125 2234 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:46:11.308255 kubelet[2234]: I0113 20:46:11.308239 2234 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:46:11.309664 kubelet[2234]: I0113 20:46:11.309638 2234 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:46:11.309664 kubelet[2234]: I0113 20:46:11.309660 2234 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:46:11.309735 kubelet[2234]: I0113 20:46:11.309695 2234 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:46:11.309735 kubelet[2234]: I0113 20:46:11.309711 2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:46:11.313992 kubelet[2234]: I0113 20:46:11.313973 2234 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:46:11.315278 kubelet[2234]: W0113 20:46:11.315042 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Jan 13 20:46:11.315278 kubelet[2234]: E0113 20:46:11.315107 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:46:11.315755 kubelet[2234]: I0113 20:46:11.315733 2234 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:46:11.316596 kubelet[2234]: W0113 20:46:11.316550 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Jan 13 20:46:11.316596 kubelet[2234]: W0113 20:46:11.316591 2234 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:46:11.316666 kubelet[2234]: E0113 20:46:11.316621 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:46:11.317195 kubelet[2234]: I0113 20:46:11.317170 2234 server.go:1269] "Started kubelet" Jan 13 20:46:11.317943 kubelet[2234]: I0113 20:46:11.317818 2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:46:11.318354 kubelet[2234]: I0113 20:46:11.318339 2234 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:46:11.319893 kubelet[2234]: I0113 20:46:11.318507 2234 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:46:11.319893 kubelet[2234]: I0113 20:46:11.318569 2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:46:11.319893 
kubelet[2234]: I0113 20:46:11.319238 2234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:46:11.319893 kubelet[2234]: I0113 20:46:11.319418 2234 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:46:11.320327 kubelet[2234]: I0113 20:46:11.320313 2234 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:46:11.320445 kubelet[2234]: I0113 20:46:11.320434 2234 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:46:11.320561 kubelet[2234]: I0113 20:46:11.320551 2234 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:46:11.320889 kubelet[2234]: W0113 20:46:11.320844 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Jan 13 20:46:11.320986 kubelet[2234]: E0113 20:46:11.320954 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:46:11.321199 kubelet[2234]: E0113 20:46:11.321170 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.321390 kubelet[2234]: E0113 20:46:11.321370 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms" Jan 13 20:46:11.321907 kubelet[2234]: I0113 20:46:11.321892 2234 factory.go:221] Registration of the systemd 
container factory successfully Jan 13 20:46:11.323000 kubelet[2234]: E0113 20:46:11.322308 2234 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:46:11.323043 kubelet[2234]: I0113 20:46:11.323000 2234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:46:11.323125 kubelet[2234]: E0113 20:46:11.321174 2234 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b769dd74d8c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:46:11.317149068 +0000 UTC m=+0.371585025,LastTimestamp:2025-01-13 20:46:11.317149068 +0000 UTC m=+0.371585025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:46:11.324259 kubelet[2234]: I0113 20:46:11.324237 2234 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:46:11.337556 kubelet[2234]: I0113 20:46:11.337391 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:46:11.340101 kubelet[2234]: I0113 20:46:11.340008 2234 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:46:11.340142 kubelet[2234]: I0113 20:46:11.340109 2234 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:46:11.340194 kubelet[2234]: I0113 20:46:11.340177 2234 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:46:11.340261 kubelet[2234]: I0113 20:46:11.340237 2234 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:46:11.340261 kubelet[2234]: E0113 20:46:11.340241 2234 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:46:11.340261 kubelet[2234]: I0113 20:46:11.340251 2234 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:46:11.340343 kubelet[2234]: I0113 20:46:11.340266 2234 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:46:11.341071 kubelet[2234]: W0113 20:46:11.341040 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Jan 13 20:46:11.341108 kubelet[2234]: E0113 20:46:11.341077 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:46:11.422434 kubelet[2234]: E0113 20:46:11.422356 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.440678 kubelet[2234]: E0113 20:46:11.440625 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:46:11.522052 kubelet[2234]: E0113 20:46:11.521943 2234 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms" Jan 13 20:46:11.522962 kubelet[2234]: E0113 20:46:11.522933 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.623463 kubelet[2234]: E0113 20:46:11.623413 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.641691 kubelet[2234]: E0113 20:46:11.641628 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:46:11.724296 kubelet[2234]: E0113 20:46:11.724252 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.824805 kubelet[2234]: E0113 20:46:11.824667 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.922596 kubelet[2234]: E0113 20:46:11.922521 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Jan 13 20:46:11.924927 kubelet[2234]: E0113 20:46:11.924893 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:46:11.956158 kubelet[2234]: I0113 20:46:11.956005 2234 policy_none.go:49] "None policy: Start" Jan 13 20:46:11.957367 kubelet[2234]: I0113 20:46:11.957207 2234 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:46:11.957367 kubelet[2234]: I0113 20:46:11.957240 2234 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:46:11.966565 systemd[1]: Created 
slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:46:11.986719 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:46:11.990401 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:46:11.999934 kubelet[2234]: I0113 20:46:11.999859 2234 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:46:12.000391 kubelet[2234]: I0113 20:46:12.000150 2234 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:46:12.000391 kubelet[2234]: I0113 20:46:12.000162 2234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:46:12.000391 kubelet[2234]: I0113 20:46:12.000367 2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:46:12.001172 kubelet[2234]: E0113 20:46:12.001152 2234 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:46:12.051239 systemd[1]: Created slice kubepods-burstable-pod47851f885719150f84760a209855dd59.slice - libcontainer container kubepods-burstable-pod47851f885719150f84760a209855dd59.slice. Jan 13 20:46:12.071583 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 20:46:12.081900 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Jan 13 20:46:12.102314 kubelet[2234]: I0113 20:46:12.102277 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:46:12.102645 kubelet[2234]: E0113 20:46:12.102621 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Jan 13 20:46:12.125177 kubelet[2234]: I0113 20:46:12.125126 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47851f885719150f84760a209855dd59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47851f885719150f84760a209855dd59\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:46:12.125177 kubelet[2234]: I0113 20:46:12.125169 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47851f885719150f84760a209855dd59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47851f885719150f84760a209855dd59\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:46:12.125177 kubelet[2234]: I0113 20:46:12.125186 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:46:12.125177 kubelet[2234]: I0113 20:46:12.125201 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" 
Jan 13 20:46:12.125424 kubelet[2234]: I0113 20:46:12.125218 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 20:46:12.125424 kubelet[2234]: I0113 20:46:12.125276 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47851f885719150f84760a209855dd59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47851f885719150f84760a209855dd59\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:46:12.125424 kubelet[2234]: I0113 20:46:12.125311 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:46:12.125424 kubelet[2234]: I0113 20:46:12.125342 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:46:12.125424 kubelet[2234]: I0113 20:46:12.125373 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:46:12.304360 kubelet[2234]: I0113 20:46:12.304311 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:46:12.304753 kubelet[2234]: E0113 20:46:12.304725 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 13 20:46:12.369204 kubelet[2234]: E0113 20:46:12.368932 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:12.369855 containerd[1502]: time="2025-01-13T20:46:12.369795301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47851f885719150f84760a209855dd59,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:12.380154 kubelet[2234]: E0113 20:46:12.380124 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:12.380696 containerd[1502]: time="2025-01-13T20:46:12.380650189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:12.385070 kubelet[2234]: E0113 20:46:12.385021 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:12.385620 containerd[1502]: time="2025-01-13T20:46:12.385576215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:12.507800 kubelet[2234]: W0113 20:46:12.507732 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 13 20:46:12.507800 kubelet[2234]: E0113 20:46:12.507803 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:12.655395 kubelet[2234]: W0113 20:46:12.655280 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 13 20:46:12.655395 kubelet[2234]: E0113 20:46:12.655332 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:12.695034 kubelet[2234]: W0113 20:46:12.694970 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 13 20:46:12.695132 kubelet[2234]: E0113 20:46:12.695032 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:12.706226 kubelet[2234]: I0113 20:46:12.706199 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:46:12.706432 kubelet[2234]: E0113 20:46:12.706404 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 13 20:46:12.722830 kubelet[2234]: E0113 20:46:12.722782 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s"
Jan 13 20:46:12.757212 kubelet[2234]: W0113 20:46:12.757162 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 13 20:46:12.757212 kubelet[2234]: E0113 20:46:12.757207 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:13.376668 kubelet[2234]: E0113 20:46:13.376611 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:13.432850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858578157.mount: Deactivated successfully.
Jan 13 20:46:13.508788 kubelet[2234]: I0113 20:46:13.508760 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:46:13.509138 kubelet[2234]: E0113 20:46:13.509093 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 13 20:46:14.323523 kubelet[2234]: E0113 20:46:14.323466 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="3.2s"
Jan 13 20:46:14.572038 kubelet[2234]: W0113 20:46:14.571977 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 13 20:46:14.572038 kubelet[2234]: E0113 20:46:14.572032 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:14.664437 containerd[1502]: time="2025-01-13T20:46:14.664296018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:46:14.666824 containerd[1502]: time="2025-01-13T20:46:14.666763476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 20:46:14.670556 containerd[1502]: time="2025-01-13T20:46:14.670496540Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:46:14.671698 containerd[1502]: time="2025-01-13T20:46:14.671643861Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:46:14.675033 containerd[1502]: time="2025-01-13T20:46:14.674972983Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:46:14.677055 containerd[1502]: time="2025-01-13T20:46:14.677006758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:46:14.678504 containerd[1502]: time="2025-01-13T20:46:14.678475102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:46:14.680565 containerd[1502]: time="2025-01-13T20:46:14.680496226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:46:14.681620 containerd[1502]: time="2025-01-13T20:46:14.681406516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.311442308s"
Jan 13 20:46:14.685000 containerd[1502]: time="2025-01-13T20:46:14.684927590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.304160112s"
Jan 13 20:46:14.686931 containerd[1502]: time="2025-01-13T20:46:14.686827482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.301135182s"
Jan 13 20:46:14.866348 containerd[1502]: time="2025-01-13T20:46:14.866020871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:14.866348 containerd[1502]: time="2025-01-13T20:46:14.866075873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:14.866348 containerd[1502]: time="2025-01-13T20:46:14.866086709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:14.866348 containerd[1502]: time="2025-01-13T20:46:14.866172406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:14.867554 containerd[1502]: time="2025-01-13T20:46:14.865376991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:14.867554 containerd[1502]: time="2025-01-13T20:46:14.867528529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:14.867554 containerd[1502]: time="2025-01-13T20:46:14.867544297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:14.867722 containerd[1502]: time="2025-01-13T20:46:14.867657901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:14.871545 containerd[1502]: time="2025-01-13T20:46:14.871321457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:14.871660 containerd[1502]: time="2025-01-13T20:46:14.871511524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:14.875521 containerd[1502]: time="2025-01-13T20:46:14.875355666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:14.876637 containerd[1502]: time="2025-01-13T20:46:14.875474402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:14.891098 systemd[1]: Started cri-containerd-09d7fcfa914a9e448f79babfda6d3718241846379cade01bd0b8518fcd79ca9d.scope - libcontainer container 09d7fcfa914a9e448f79babfda6d3718241846379cade01bd0b8518fcd79ca9d.
Jan 13 20:46:14.895891 systemd[1]: Started cri-containerd-1f23beae07b60ada0dce8b9e35a999df946e5e009dd7248d8e23b5d8ffd49151.scope - libcontainer container 1f23beae07b60ada0dce8b9e35a999df946e5e009dd7248d8e23b5d8ffd49151.
Jan 13 20:46:14.897591 systemd[1]: Started cri-containerd-65146f249dc6efc704a997c9527a53733eb87e6fbddde01ef22364b02e39009a.scope - libcontainer container 65146f249dc6efc704a997c9527a53733eb87e6fbddde01ef22364b02e39009a.
Jan 13 20:46:14.940115 containerd[1502]: time="2025-01-13T20:46:14.939740124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"09d7fcfa914a9e448f79babfda6d3718241846379cade01bd0b8518fcd79ca9d\""
Jan 13 20:46:14.941080 kubelet[2234]: E0113 20:46:14.941003 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:14.942354 containerd[1502]: time="2025-01-13T20:46:14.942311473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f23beae07b60ada0dce8b9e35a999df946e5e009dd7248d8e23b5d8ffd49151\""
Jan 13 20:46:14.944842 kubelet[2234]: E0113 20:46:14.944701 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:14.944917 containerd[1502]: time="2025-01-13T20:46:14.944757489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47851f885719150f84760a209855dd59,Namespace:kube-system,Attempt:0,} returns sandbox id \"65146f249dc6efc704a997c9527a53733eb87e6fbddde01ef22364b02e39009a\""
Jan 13 20:46:14.945745 kubelet[2234]: E0113 20:46:14.945642 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:14.946858 containerd[1502]: time="2025-01-13T20:46:14.946819782Z" level=info msg="CreateContainer within sandbox \"09d7fcfa914a9e448f79babfda6d3718241846379cade01bd0b8518fcd79ca9d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:46:14.947666 containerd[1502]: time="2025-01-13T20:46:14.947553188Z" level=info msg="CreateContainer within sandbox \"1f23beae07b60ada0dce8b9e35a999df946e5e009dd7248d8e23b5d8ffd49151\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:46:14.949071 containerd[1502]: time="2025-01-13T20:46:14.948977525Z" level=info msg="CreateContainer within sandbox \"65146f249dc6efc704a997c9527a53733eb87e6fbddde01ef22364b02e39009a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:46:14.966604 kubelet[2234]: W0113 20:46:14.966544 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Jan 13 20:46:14.966688 kubelet[2234]: E0113 20:46:14.966607 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:46:14.976462 containerd[1502]: time="2025-01-13T20:46:14.976326533Z" level=info msg="CreateContainer within sandbox \"1f23beae07b60ada0dce8b9e35a999df946e5e009dd7248d8e23b5d8ffd49151\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc6419be6d9c8051a4eb0af7c733803d88a6aa8bb727595d521067fe33630ea5\""
Jan 13 20:46:14.977045 containerd[1502]: time="2025-01-13T20:46:14.977013768Z" level=info msg="StartContainer for \"bc6419be6d9c8051a4eb0af7c733803d88a6aa8bb727595d521067fe33630ea5\""
Jan 13 20:46:14.978778 containerd[1502]: time="2025-01-13T20:46:14.978708977Z" level=info msg="CreateContainer within sandbox \"09d7fcfa914a9e448f79babfda6d3718241846379cade01bd0b8518fcd79ca9d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d0adc4859655d9faa9ae0344498c875365419aa2c168a5d48407c6d493e2e752\""
Jan 13 20:46:14.979137 containerd[1502]: time="2025-01-13T20:46:14.979102023Z" level=info msg="StartContainer for \"d0adc4859655d9faa9ae0344498c875365419aa2c168a5d48407c6d493e2e752\""
Jan 13 20:46:14.980463 containerd[1502]: time="2025-01-13T20:46:14.980442179Z" level=info msg="CreateContainer within sandbox \"65146f249dc6efc704a997c9527a53733eb87e6fbddde01ef22364b02e39009a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"653fd84930bb86c283414627d1e2784c19238e4188df1f99fb49e635f966987b\""
Jan 13 20:46:14.980816 containerd[1502]: time="2025-01-13T20:46:14.980778086Z" level=info msg="StartContainer for \"653fd84930bb86c283414627d1e2784c19238e4188df1f99fb49e635f966987b\""
Jan 13 20:46:15.006052 systemd[1]: Started cri-containerd-bc6419be6d9c8051a4eb0af7c733803d88a6aa8bb727595d521067fe33630ea5.scope - libcontainer container bc6419be6d9c8051a4eb0af7c733803d88a6aa8bb727595d521067fe33630ea5.
Jan 13 20:46:15.016072 systemd[1]: Started cri-containerd-653fd84930bb86c283414627d1e2784c19238e4188df1f99fb49e635f966987b.scope - libcontainer container 653fd84930bb86c283414627d1e2784c19238e4188df1f99fb49e635f966987b.
Jan 13 20:46:15.017635 systemd[1]: Started cri-containerd-d0adc4859655d9faa9ae0344498c875365419aa2c168a5d48407c6d493e2e752.scope - libcontainer container d0adc4859655d9faa9ae0344498c875365419aa2c168a5d48407c6d493e2e752.
Jan 13 20:46:15.112081 kubelet[2234]: I0113 20:46:15.111605 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:46:15.112081 kubelet[2234]: E0113 20:46:15.111921 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Jan 13 20:46:15.161708 containerd[1502]: time="2025-01-13T20:46:15.160699487Z" level=info msg="StartContainer for \"653fd84930bb86c283414627d1e2784c19238e4188df1f99fb49e635f966987b\" returns successfully"
Jan 13 20:46:15.161708 containerd[1502]: time="2025-01-13T20:46:15.160980335Z" level=info msg="StartContainer for \"bc6419be6d9c8051a4eb0af7c733803d88a6aa8bb727595d521067fe33630ea5\" returns successfully"
Jan 13 20:46:15.161708 containerd[1502]: time="2025-01-13T20:46:15.161029752Z" level=info msg="StartContainer for \"d0adc4859655d9faa9ae0344498c875365419aa2c168a5d48407c6d493e2e752\" returns successfully"
Jan 13 20:46:15.351797 kubelet[2234]: E0113 20:46:15.351756 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:15.353798 kubelet[2234]: E0113 20:46:15.353774 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:15.355734 kubelet[2234]: E0113 20:46:15.355710 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:16.316610 kubelet[2234]: I0113 20:46:16.316542 2234 apiserver.go:52] "Watching apiserver"
Jan 13 20:46:16.321297 kubelet[2234]: I0113 20:46:16.321253 2234 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:46:16.357462 kubelet[2234]: E0113 20:46:16.357417 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:16.357857 kubelet[2234]: E0113 20:46:16.357831 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:16.634916 kubelet[2234]: E0113 20:46:16.634629 2234 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 13 20:46:16.983280 kubelet[2234]: E0113 20:46:16.983104 2234 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 13 20:46:17.426342 kubelet[2234]: E0113 20:46:17.426288 2234 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jan 13 20:46:17.527516 kubelet[2234]: E0113 20:46:17.527473 2234 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 20:46:18.313989 kubelet[2234]: I0113 20:46:18.313954 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:46:18.320261 kubelet[2234]: I0113 20:46:18.320220 2234 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 13 20:46:18.320261 kubelet[2234]: E0113 20:46:18.320252 2234 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 13 20:46:19.365456 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-9.scope)...
Jan 13 20:46:19.365470 systemd[1]: Reloading...
Jan 13 20:46:19.450496 zram_generator::config[2565]: No configuration found.
Jan 13 20:46:19.550441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:46:19.642840 systemd[1]: Reloading finished in 276 ms.
Jan 13 20:46:19.687447 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:46:19.716213 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:46:19.716495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:46:19.724172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:46:19.875811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:46:19.880931 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:46:19.919260 kubelet[2604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:46:19.919260 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:46:19.919260 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:46:19.919260 kubelet[2604]: I0113 20:46:19.919233 2604 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:46:19.925631 kubelet[2604]: I0113 20:46:19.925605 2604 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:46:19.925631 kubelet[2604]: I0113 20:46:19.925625 2604 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:46:19.925829 kubelet[2604]: I0113 20:46:19.925813 2604 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:46:19.926902 kubelet[2604]: I0113 20:46:19.926884 2604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:46:19.928500 kubelet[2604]: I0113 20:46:19.928472 2604 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:46:19.931589 kubelet[2604]: E0113 20:46:19.931568 2604 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:46:19.931589 kubelet[2604]: I0113 20:46:19.931589 2604 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:46:19.935608 kubelet[2604]: I0113 20:46:19.935591 2604 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:46:19.935716 kubelet[2604]: I0113 20:46:19.935702 2604 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:46:19.935858 kubelet[2604]: I0113 20:46:19.935832 2604 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:46:19.936042 kubelet[2604]: I0113 20:46:19.935860 2604 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:46:19.936117 kubelet[2604]: I0113 20:46:19.936051 2604 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:46:19.936117 kubelet[2604]: I0113 20:46:19.936062 2604 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:46:19.936117 kubelet[2604]: I0113 20:46:19.936092 2604 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:46:19.936212 kubelet[2604]: I0113 20:46:19.936198 2604 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:46:19.936236 kubelet[2604]: I0113 20:46:19.936211 2604 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:46:19.936256 kubelet[2604]: I0113 20:46:19.936245 2604 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:46:19.936275 kubelet[2604]: I0113 20:46:19.936261 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:46:19.938919 kubelet[2604]: I0113 20:46:19.937757 2604 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:46:19.938919 kubelet[2604]: I0113 20:46:19.938407 2604 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:46:19.939251 kubelet[2604]: I0113 20:46:19.939228 2604 server.go:1269] "Started kubelet"
Jan 13 20:46:19.940631 kubelet[2604]: I0113 20:46:19.940579 2604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:46:19.940808 kubelet[2604]: I0113 20:46:19.940776 2604 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:46:19.941024 kubelet[2604]: I0113 20:46:19.941001 2604 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:46:19.942942 kubelet[2604]: I0113 20:46:19.942350 2604 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:46:19.942942 kubelet[2604]: I0113 20:46:19.942497 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:46:19.943449 kubelet[2604]: I0113 20:46:19.943286 2604 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:46:19.943535 kubelet[2604]: I0113 20:46:19.943504 2604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:46:19.943628 kubelet[2604]: I0113 20:46:19.943611 2604 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:46:19.943758 kubelet[2604]: I0113 20:46:19.943743 2604 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:46:19.944682 kubelet[2604]: E0113 20:46:19.944659 2604 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:46:19.947021 kubelet[2604]: I0113 20:46:19.947005 2604 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:46:19.947101 kubelet[2604]: I0113 20:46:19.947082 2604 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:46:19.948683 kubelet[2604]: I0113 20:46:19.948665 2604 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:46:19.949015 kubelet[2604]: E0113 20:46:19.949001 2604 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:46:19.958549 kubelet[2604]: I0113 20:46:19.958407 2604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:46:19.959558 kubelet[2604]: I0113 20:46:19.959544 2604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:46:19.959633 kubelet[2604]: I0113 20:46:19.959624 2604 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:46:19.959696 kubelet[2604]: I0113 20:46:19.959687 2604 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:46:19.959778 kubelet[2604]: E0113 20:46:19.959763 2604 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:46:19.990994 kubelet[2604]: I0113 20:46:19.990964 2604 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:46:19.990994 kubelet[2604]: I0113 20:46:19.990981 2604 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:46:19.990994 kubelet[2604]: I0113 20:46:19.990998 2604 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:46:19.991166 kubelet[2604]: I0113 20:46:19.991116 2604 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:46:19.991166 kubelet[2604]: I0113 20:46:19.991126 2604 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:46:19.991166 kubelet[2604]: I0113 20:46:19.991142 2604 policy_none.go:49] "None policy: Start"
Jan 13 20:46:19.991542 kubelet[2604]: I0113 20:46:19.991520 2604 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:46:19.991542 kubelet[2604]: I0113 20:46:19.991538 2604 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:46:19.991689 kubelet[2604]: I0113 20:46:19.991673 2604 state_mem.go:75] "Updated machine memory state"
Jan 13 20:46:19.996120 kubelet[2604]: I0113 20:46:19.996101 2604 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:46:19.996394 kubelet[2604]: I0113 20:46:19.996256 2604 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:46:19.996394 kubelet[2604]: I0113 20:46:19.996268 2604 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:46:19.996477 kubelet[2604]: I0113 20:46:19.996400 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:46:20.101901 kubelet[2604]: I0113 20:46:20.101843 2604 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:46:20.110508 kubelet[2604]: I0113 20:46:20.110455 2604 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 13 20:46:20.110618 kubelet[2604]: I0113 20:46:20.110562 2604 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 13 20:46:20.245003 kubelet[2604]: I0113 20:46:20.244836 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:46:20.245003 kubelet[2604]: I0113 20:46:20.244902 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47851f885719150f84760a209855dd59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47851f885719150f84760a209855dd59\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:46:20.245003 kubelet[2604]: I0113 20:46:20.244920 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47851f885719150f84760a209855dd59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47851f885719150f84760a209855dd59\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:46:20.245003 kubelet[2604]: I0113 20:46:20.244942 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:46:20.245003 kubelet[2604]: I0113 20:46:20.244961 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:46:20.245251 kubelet[2604]: I0113 20:46:20.245010 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:46:20.245251 kubelet[2604]: I0113 20:46:20.245048 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47851f885719150f84760a209855dd59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47851f885719150f84760a209855dd59\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:46:20.245251 kubelet[2604]: I0113 20:46:20.245077 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:46:20.245251 kubelet[2604]: I0113 20:46:20.245098 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:46:20.367701 kubelet[2604]: E0113 20:46:20.367643 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:20.369966 kubelet[2604]: E0113 20:46:20.369942 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:20.370037 kubelet[2604]: E0113 20:46:20.370018 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:20.936614 kubelet[2604]: I0113 20:46:20.936542 2604 apiserver.go:52] "Watching apiserver" Jan 13 20:46:20.975251 kubelet[2604]: E0113 20:46:20.975217 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:20.976331 kubelet[2604]: E0113 20:46:20.976288 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:20.976417 kubelet[2604]: E0113 20:46:20.975868 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:21.043986 kubelet[2604]: I0113 20:46:21.043923 2604 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:46:21.165005 kubelet[2604]: I0113 20:46:21.164643 2604 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1646025660000001 podStartE2EDuration="1.164602566s" podCreationTimestamp="2025-01-13 20:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:21.036691762 +0000 UTC m=+1.148306849" watchObservedRunningTime="2025-01-13 20:46:21.164602566 +0000 UTC m=+1.276217653" Jan 13 20:46:21.247938 kubelet[2604]: I0113 20:46:21.246971 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.246954103 podStartE2EDuration="1.246954103s" podCreationTimestamp="2025-01-13 20:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:21.246760649 +0000 UTC m=+1.358375736" watchObservedRunningTime="2025-01-13 20:46:21.246954103 +0000 UTC m=+1.358569190" Jan 13 20:46:21.247938 kubelet[2604]: I0113 20:46:21.247082 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.24707788 podStartE2EDuration="1.24707788s" podCreationTimestamp="2025-01-13 20:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:21.196379076 +0000 UTC m=+1.307994153" watchObservedRunningTime="2025-01-13 20:46:21.24707788 +0000 UTC m=+1.358692967" Jan 13 20:46:21.977175 kubelet[2604]: E0113 20:46:21.977117 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:21.977615 kubelet[2604]: E0113 20:46:21.977378 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:23.912305 update_engine[1489]: I20250113 20:46:23.912227 1489 update_attempter.cc:509] Updating boot flags... Jan 13 20:46:23.951609 kubelet[2604]: I0113 20:46:23.950933 2604 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:46:23.951609 kubelet[2604]: I0113 20:46:23.951418 2604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:46:23.953676 containerd[1502]: time="2025-01-13T20:46:23.951213492Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:46:23.992559 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2678) Jan 13 20:46:24.045973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2679) Jan 13 20:46:24.070930 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2679) Jan 13 20:46:25.063320 systemd[1]: Created slice kubepods-besteffort-pod8e41df0a_4fca_4bc0_9e32_00fa56457088.slice - libcontainer container kubepods-besteffort-pod8e41df0a_4fca_4bc0_9e32_00fa56457088.slice. 
Jan 13 20:46:25.100201 kubelet[2604]: I0113 20:46:25.100167 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e41df0a-4fca-4bc0-9e32-00fa56457088-lib-modules\") pod \"kube-proxy-db2f6\" (UID: \"8e41df0a-4fca-4bc0-9e32-00fa56457088\") " pod="kube-system/kube-proxy-db2f6" Jan 13 20:46:25.100201 kubelet[2604]: I0113 20:46:25.100200 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e41df0a-4fca-4bc0-9e32-00fa56457088-kube-proxy\") pod \"kube-proxy-db2f6\" (UID: \"8e41df0a-4fca-4bc0-9e32-00fa56457088\") " pod="kube-system/kube-proxy-db2f6" Jan 13 20:46:25.100201 kubelet[2604]: I0113 20:46:25.100213 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e41df0a-4fca-4bc0-9e32-00fa56457088-xtables-lock\") pod \"kube-proxy-db2f6\" (UID: \"8e41df0a-4fca-4bc0-9e32-00fa56457088\") " pod="kube-system/kube-proxy-db2f6" Jan 13 20:46:25.100609 kubelet[2604]: I0113 20:46:25.100229 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkxb\" (UniqueName: \"kubernetes.io/projected/8e41df0a-4fca-4bc0-9e32-00fa56457088-kube-api-access-cxkxb\") pod \"kube-proxy-db2f6\" (UID: \"8e41df0a-4fca-4bc0-9e32-00fa56457088\") " pod="kube-system/kube-proxy-db2f6" Jan 13 20:46:25.288179 systemd[1]: Created slice kubepods-besteffort-pode48da301_557f_465d_a22d_b17e8fb6e37d.slice - libcontainer container kubepods-besteffort-pode48da301_557f_465d_a22d_b17e8fb6e37d.slice. 
Jan 13 20:46:25.301657 kubelet[2604]: I0113 20:46:25.301618 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49fcr\" (UniqueName: \"kubernetes.io/projected/e48da301-557f-465d-a22d-b17e8fb6e37d-kube-api-access-49fcr\") pod \"tigera-operator-76c4976dd7-sft9w\" (UID: \"e48da301-557f-465d-a22d-b17e8fb6e37d\") " pod="tigera-operator/tigera-operator-76c4976dd7-sft9w" Jan 13 20:46:25.301657 kubelet[2604]: I0113 20:46:25.301656 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e48da301-557f-465d-a22d-b17e8fb6e37d-var-lib-calico\") pod \"tigera-operator-76c4976dd7-sft9w\" (UID: \"e48da301-557f-465d-a22d-b17e8fb6e37d\") " pod="tigera-operator/tigera-operator-76c4976dd7-sft9w" Jan 13 20:46:25.372185 kubelet[2604]: E0113 20:46:25.372051 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:25.373388 containerd[1502]: time="2025-01-13T20:46:25.373286361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db2f6,Uid:8e41df0a-4fca-4bc0-9e32-00fa56457088,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:25.395788 containerd[1502]: time="2025-01-13T20:46:25.395713229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:25.395788 containerd[1502]: time="2025-01-13T20:46:25.395768017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:25.395788 containerd[1502]: time="2025-01-13T20:46:25.395784573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:25.396726 containerd[1502]: time="2025-01-13T20:46:25.396435789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:25.414140 systemd[1]: run-containerd-runc-k8s.io-01575b7beb25ef5aaced368d01f0b727ffdefebde3b985bf3545b425ee04211d-runc.bcMJqT.mount: Deactivated successfully. Jan 13 20:46:25.424201 systemd[1]: Started cri-containerd-01575b7beb25ef5aaced368d01f0b727ffdefebde3b985bf3545b425ee04211d.scope - libcontainer container 01575b7beb25ef5aaced368d01f0b727ffdefebde3b985bf3545b425ee04211d. Jan 13 20:46:25.452118 containerd[1502]: time="2025-01-13T20:46:25.452073060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db2f6,Uid:8e41df0a-4fca-4bc0-9e32-00fa56457088,Namespace:kube-system,Attempt:0,} returns sandbox id \"01575b7beb25ef5aaced368d01f0b727ffdefebde3b985bf3545b425ee04211d\"" Jan 13 20:46:25.452924 kubelet[2604]: E0113 20:46:25.452903 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:25.453913 sudo[1712]: pam_unix(sudo:session): session closed for user root Jan 13 20:46:25.455297 containerd[1502]: time="2025-01-13T20:46:25.455270711Z" level=info msg="CreateContainer within sandbox \"01575b7beb25ef5aaced368d01f0b727ffdefebde3b985bf3545b425ee04211d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:46:25.455754 sshd[1711]: Connection closed by 10.0.0.1 port 44854 Jan 13 20:46:25.456176 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:25.460503 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:44854.service: Deactivated successfully. Jan 13 20:46:25.462712 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 13 20:46:25.462938 systemd[1]: session-9.scope: Consumed 4.148s CPU time, 153.0M memory peak, 0B memory swap peak. Jan 13 20:46:25.463422 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:46:25.464359 systemd-logind[1488]: Removed session 9. Jan 13 20:46:25.484304 containerd[1502]: time="2025-01-13T20:46:25.484259857Z" level=info msg="CreateContainer within sandbox \"01575b7beb25ef5aaced368d01f0b727ffdefebde3b985bf3545b425ee04211d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"897b110215aa089d6758c57b5f0fd4a55af02c90adb56aaed47b4545824275ff\"" Jan 13 20:46:25.484786 containerd[1502]: time="2025-01-13T20:46:25.484768212Z" level=info msg="StartContainer for \"897b110215aa089d6758c57b5f0fd4a55af02c90adb56aaed47b4545824275ff\"" Jan 13 20:46:25.511998 systemd[1]: Started cri-containerd-897b110215aa089d6758c57b5f0fd4a55af02c90adb56aaed47b4545824275ff.scope - libcontainer container 897b110215aa089d6758c57b5f0fd4a55af02c90adb56aaed47b4545824275ff. Jan 13 20:46:25.539657 containerd[1502]: time="2025-01-13T20:46:25.539605643Z" level=info msg="StartContainer for \"897b110215aa089d6758c57b5f0fd4a55af02c90adb56aaed47b4545824275ff\" returns successfully" Jan 13 20:46:25.592966 containerd[1502]: time="2025-01-13T20:46:25.592927640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-sft9w,Uid:e48da301-557f-465d-a22d-b17e8fb6e37d,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:46:25.617336 containerd[1502]: time="2025-01-13T20:46:25.617259750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:25.617336 containerd[1502]: time="2025-01-13T20:46:25.617311763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:25.617336 containerd[1502]: time="2025-01-13T20:46:25.617325303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:25.617527 containerd[1502]: time="2025-01-13T20:46:25.617392338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:25.636050 systemd[1]: Started cri-containerd-aa2ecb7f74e5077b95a76e0ee960e993f89cc55868081d8c01890e009128011d.scope - libcontainer container aa2ecb7f74e5077b95a76e0ee960e993f89cc55868081d8c01890e009128011d. Jan 13 20:46:25.672910 containerd[1502]: time="2025-01-13T20:46:25.672851913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-sft9w,Uid:e48da301-557f-465d-a22d-b17e8fb6e37d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aa2ecb7f74e5077b95a76e0ee960e993f89cc55868081d8c01890e009128011d\"" Jan 13 20:46:25.674245 containerd[1502]: time="2025-01-13T20:46:25.674222061Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:46:25.986715 kubelet[2604]: E0113 20:46:25.986574 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:26.974061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492874831.mount: Deactivated successfully. 
Jan 13 20:46:27.183883 kubelet[2604]: E0113 20:46:27.183822 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:27.203271 kubelet[2604]: I0113 20:46:27.202651 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-db2f6" podStartSLOduration=3.202632087 podStartE2EDuration="3.202632087s" podCreationTimestamp="2025-01-13 20:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:25.995515442 +0000 UTC m=+6.107130529" watchObservedRunningTime="2025-01-13 20:46:27.202632087 +0000 UTC m=+7.314247174" Jan 13 20:46:27.415065 containerd[1502]: time="2025-01-13T20:46:27.414991049Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.415937 containerd[1502]: time="2025-01-13T20:46:27.415906123Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764321" Jan 13 20:46:27.416911 containerd[1502]: time="2025-01-13T20:46:27.416864170Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.419166 containerd[1502]: time="2025-01-13T20:46:27.419142210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.419794 containerd[1502]: time="2025-01-13T20:46:27.419753103Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag 
\"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.745500495s" Jan 13 20:46:27.419794 containerd[1502]: time="2025-01-13T20:46:27.419789099Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 20:46:27.422157 containerd[1502]: time="2025-01-13T20:46:27.422106415Z" level=info msg="CreateContainer within sandbox \"aa2ecb7f74e5077b95a76e0ee960e993f89cc55868081d8c01890e009128011d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:46:27.434500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512300712.mount: Deactivated successfully. Jan 13 20:46:27.435493 containerd[1502]: time="2025-01-13T20:46:27.435452536Z" level=info msg="CreateContainer within sandbox \"aa2ecb7f74e5077b95a76e0ee960e993f89cc55868081d8c01890e009128011d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c27a17aeb724ddcf54a54e60ea10dc0ad8df572c3d824023ca343dde80969226\"" Jan 13 20:46:27.435943 containerd[1502]: time="2025-01-13T20:46:27.435916493Z" level=info msg="StartContainer for \"c27a17aeb724ddcf54a54e60ea10dc0ad8df572c3d824023ca343dde80969226\"" Jan 13 20:46:27.475036 systemd[1]: Started cri-containerd-c27a17aeb724ddcf54a54e60ea10dc0ad8df572c3d824023ca343dde80969226.scope - libcontainer container c27a17aeb724ddcf54a54e60ea10dc0ad8df572c3d824023ca343dde80969226. 
Jan 13 20:46:27.560482 containerd[1502]: time="2025-01-13T20:46:27.560419551Z" level=info msg="StartContainer for \"c27a17aeb724ddcf54a54e60ea10dc0ad8df572c3d824023ca343dde80969226\" returns successfully" Jan 13 20:46:27.991576 kubelet[2604]: E0113 20:46:27.991496 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:28.285244 kubelet[2604]: E0113 20:46:28.285111 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:28.298231 kubelet[2604]: I0113 20:46:28.298000 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-sft9w" podStartSLOduration=1.551148499 podStartE2EDuration="3.29797915s" podCreationTimestamp="2025-01-13 20:46:25 +0000 UTC" firstStartedPulling="2025-01-13 20:46:25.673847377 +0000 UTC m=+5.785462464" lastFinishedPulling="2025-01-13 20:46:27.420678028 +0000 UTC m=+7.532293115" observedRunningTime="2025-01-13 20:46:28.047533144 +0000 UTC m=+8.159148232" watchObservedRunningTime="2025-01-13 20:46:28.29797915 +0000 UTC m=+8.409594257" Jan 13 20:46:28.992756 kubelet[2604]: E0113 20:46:28.992535 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:28.992756 kubelet[2604]: E0113 20:46:28.992722 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:30.384621 systemd[1]: Created slice kubepods-besteffort-pod3ba67851_9194_4721_a9f4_8eb95459b3d7.slice - libcontainer container kubepods-besteffort-pod3ba67851_9194_4721_a9f4_8eb95459b3d7.slice. 
Jan 13 20:46:30.431602 systemd[1]: Created slice kubepods-besteffort-podac4bb3a9_6cd3_4950_b949_06f2530803c5.slice - libcontainer container kubepods-besteffort-podac4bb3a9_6cd3_4950_b949_06f2530803c5.slice. Jan 13 20:46:30.433103 kubelet[2604]: I0113 20:46:30.433073 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ba67851-9194-4721-a9f4-8eb95459b3d7-tigera-ca-bundle\") pod \"calico-typha-8499f7f88f-wwx7d\" (UID: \"3ba67851-9194-4721-a9f4-8eb95459b3d7\") " pod="calico-system/calico-typha-8499f7f88f-wwx7d" Jan 13 20:46:30.433423 kubelet[2604]: I0113 20:46:30.433224 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ba67851-9194-4721-a9f4-8eb95459b3d7-typha-certs\") pod \"calico-typha-8499f7f88f-wwx7d\" (UID: \"3ba67851-9194-4721-a9f4-8eb95459b3d7\") " pod="calico-system/calico-typha-8499f7f88f-wwx7d" Jan 13 20:46:30.433423 kubelet[2604]: I0113 20:46:30.433252 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdr8p\" (UniqueName: \"kubernetes.io/projected/3ba67851-9194-4721-a9f4-8eb95459b3d7-kube-api-access-gdr8p\") pod \"calico-typha-8499f7f88f-wwx7d\" (UID: \"3ba67851-9194-4721-a9f4-8eb95459b3d7\") " pod="calico-system/calico-typha-8499f7f88f-wwx7d" Jan 13 20:46:30.533905 kubelet[2604]: I0113 20:46:30.533847 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-var-run-calico\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.533905 kubelet[2604]: I0113 20:46:30.533910 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-policysync\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534078 kubelet[2604]: I0113 20:46:30.533931 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-cni-bin-dir\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534078 kubelet[2604]: I0113 20:46:30.533951 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ac4bb3a9-6cd3-4950-b949-06f2530803c5-node-certs\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534078 kubelet[2604]: I0113 20:46:30.533974 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-flexvol-driver-host\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534078 kubelet[2604]: I0113 20:46:30.534020 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac4bb3a9-6cd3-4950-b949-06f2530803c5-tigera-ca-bundle\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534078 kubelet[2604]: I0113 20:46:30.534034 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-cni-net-dir\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534186 kubelet[2604]: I0113 20:46:30.534095 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-var-lib-calico\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534186 kubelet[2604]: I0113 20:46:30.534117 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-cni-log-dir\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534186 kubelet[2604]: I0113 20:46:30.534143 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-lib-modules\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534186 kubelet[2604]: I0113 20:46:30.534156 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac4bb3a9-6cd3-4950-b949-06f2530803c5-xtables-lock\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.534186 kubelet[2604]: I0113 20:46:30.534174 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cht4t\" (UniqueName: 
\"kubernetes.io/projected/ac4bb3a9-6cd3-4950-b949-06f2530803c5-kube-api-access-cht4t\") pod \"calico-node-86jnl\" (UID: \"ac4bb3a9-6cd3-4950-b949-06f2530803c5\") " pod="calico-system/calico-node-86jnl" Jan 13 20:46:30.609893 kubelet[2604]: E0113 20:46:30.609829 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:30.639057 kubelet[2604]: E0113 20:46:30.638303 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.639057 kubelet[2604]: W0113 20:46:30.638330 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.639057 kubelet[2604]: E0113 20:46:30.638364 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.646938 kubelet[2604]: E0113 20:46:30.646906 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.647260 kubelet[2604]: W0113 20:46:30.647073 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.647260 kubelet[2604]: E0113 20:46:30.647109 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.649576 kubelet[2604]: E0113 20:46:30.649561 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.649696 kubelet[2604]: W0113 20:46:30.649643 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.649740 kubelet[2604]: E0113 20:46:30.649693 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.650307 kubelet[2604]: E0113 20:46:30.650201 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.650307 kubelet[2604]: W0113 20:46:30.650212 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.650693 kubelet[2604]: E0113 20:46:30.650388 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.650796 kubelet[2604]: E0113 20:46:30.650783 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.650841 kubelet[2604]: W0113 20:46:30.650831 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.651116 kubelet[2604]: E0113 20:46:30.651100 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.652228 kubelet[2604]: E0113 20:46:30.652213 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.652297 kubelet[2604]: W0113 20:46:30.652286 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.652507 kubelet[2604]: E0113 20:46:30.652484 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.653208 kubelet[2604]: E0113 20:46:30.653148 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.653208 kubelet[2604]: W0113 20:46:30.653158 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.653347 kubelet[2604]: E0113 20:46:30.653314 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.655444 kubelet[2604]: E0113 20:46:30.655355 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.655444 kubelet[2604]: W0113 20:46:30.655369 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.655536 kubelet[2604]: E0113 20:46:30.655476 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.657077 kubelet[2604]: E0113 20:46:30.656723 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.657077 kubelet[2604]: W0113 20:46:30.656749 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.657223 kubelet[2604]: E0113 20:46:30.657202 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.657223 kubelet[2604]: W0113 20:46:30.657219 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.657345 kubelet[2604]: E0113 20:46:30.657323 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.657531 kubelet[2604]: E0113 20:46:30.657516 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.657531 kubelet[2604]: W0113 20:46:30.657527 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.658396 kubelet[2604]: E0113 20:46:30.657622 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.658396 kubelet[2604]: E0113 20:46:30.657931 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.658495 kubelet[2604]: E0113 20:46:30.658479 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.658495 kubelet[2604]: W0113 20:46:30.658490 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.658654 kubelet[2604]: E0113 20:46:30.658635 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.658824 kubelet[2604]: E0113 20:46:30.658807 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.658824 kubelet[2604]: W0113 20:46:30.658819 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.658996 kubelet[2604]: E0113 20:46:30.658887 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.659515 kubelet[2604]: E0113 20:46:30.659495 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.659515 kubelet[2604]: W0113 20:46:30.659510 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.659581 kubelet[2604]: E0113 20:46:30.659551 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.659814 kubelet[2604]: E0113 20:46:30.659720 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.659814 kubelet[2604]: W0113 20:46:30.659731 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.660125 kubelet[2604]: E0113 20:46:30.659917 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.660125 kubelet[2604]: E0113 20:46:30.659991 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.660125 kubelet[2604]: W0113 20:46:30.659997 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.660125 kubelet[2604]: E0113 20:46:30.660082 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.660289 kubelet[2604]: E0113 20:46:30.660266 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.660289 kubelet[2604]: W0113 20:46:30.660279 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.660391 kubelet[2604]: E0113 20:46:30.660357 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.661250 kubelet[2604]: E0113 20:46:30.661205 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.661250 kubelet[2604]: W0113 20:46:30.661217 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.661250 kubelet[2604]: E0113 20:46:30.661252 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.661530 kubelet[2604]: E0113 20:46:30.661500 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.661530 kubelet[2604]: W0113 20:46:30.661510 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.661530 kubelet[2604]: E0113 20:46:30.661528 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.661823 kubelet[2604]: E0113 20:46:30.661790 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.661823 kubelet[2604]: W0113 20:46:30.661802 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.661823 kubelet[2604]: E0113 20:46:30.661811 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.690027 kubelet[2604]: E0113 20:46:30.689993 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:30.690623 containerd[1502]: time="2025-01-13T20:46:30.690579156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8499f7f88f-wwx7d,Uid:3ba67851-9194-4721-a9f4-8eb95459b3d7,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:30.706000 kubelet[2604]: E0113 20:46:30.705955 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.706000 kubelet[2604]: W0113 20:46:30.705979 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.706000 kubelet[2604]: E0113 20:46:30.705998 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.707071 kubelet[2604]: E0113 20:46:30.707057 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.707071 kubelet[2604]: W0113 20:46:30.707070 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.707166 kubelet[2604]: E0113 20:46:30.707079 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.708187 kubelet[2604]: E0113 20:46:30.708168 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.708187 kubelet[2604]: W0113 20:46:30.708182 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.708260 kubelet[2604]: E0113 20:46:30.708192 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.708782 kubelet[2604]: E0113 20:46:30.708764 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.708782 kubelet[2604]: W0113 20:46:30.708778 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.708847 kubelet[2604]: E0113 20:46:30.708787 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.709573 kubelet[2604]: E0113 20:46:30.709557 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.709573 kubelet[2604]: W0113 20:46:30.709571 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.709642 kubelet[2604]: E0113 20:46:30.709581 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.710542 kubelet[2604]: E0113 20:46:30.709980 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.710542 kubelet[2604]: W0113 20:46:30.709990 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.710542 kubelet[2604]: E0113 20:46:30.710030 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.710542 kubelet[2604]: E0113 20:46:30.710442 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.710542 kubelet[2604]: W0113 20:46:30.710450 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.710542 kubelet[2604]: E0113 20:46:30.710500 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.711001 kubelet[2604]: E0113 20:46:30.710984 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.711001 kubelet[2604]: W0113 20:46:30.710997 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.711060 kubelet[2604]: E0113 20:46:30.711006 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.711564 kubelet[2604]: E0113 20:46:30.711548 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.711564 kubelet[2604]: W0113 20:46:30.711561 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.711627 kubelet[2604]: E0113 20:46:30.711571 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.711948 kubelet[2604]: E0113 20:46:30.711931 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.711948 kubelet[2604]: W0113 20:46:30.711944 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.712014 kubelet[2604]: E0113 20:46:30.711953 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.712234 kubelet[2604]: E0113 20:46:30.712217 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.712268 kubelet[2604]: W0113 20:46:30.712255 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.712268 kubelet[2604]: E0113 20:46:30.712264 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.712991 kubelet[2604]: E0113 20:46:30.712976 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.712991 kubelet[2604]: W0113 20:46:30.712988 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.713045 kubelet[2604]: E0113 20:46:30.712999 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.714126 kubelet[2604]: E0113 20:46:30.713316 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.714126 kubelet[2604]: W0113 20:46:30.713326 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.714126 kubelet[2604]: E0113 20:46:30.713335 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.714126 kubelet[2604]: E0113 20:46:30.713617 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.714126 kubelet[2604]: W0113 20:46:30.713627 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.714126 kubelet[2604]: E0113 20:46:30.713638 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.714308 kubelet[2604]: E0113 20:46:30.714180 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.714308 kubelet[2604]: W0113 20:46:30.714208 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.714308 kubelet[2604]: E0113 20:46:30.714241 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.714652 kubelet[2604]: E0113 20:46:30.714624 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.714652 kubelet[2604]: W0113 20:46:30.714647 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.714717 kubelet[2604]: E0113 20:46:30.714662 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.715166 kubelet[2604]: E0113 20:46:30.715137 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.715166 kubelet[2604]: W0113 20:46:30.715156 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.715231 kubelet[2604]: E0113 20:46:30.715170 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.715437 kubelet[2604]: E0113 20:46:30.715423 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.715557 kubelet[2604]: W0113 20:46:30.715495 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.715557 kubelet[2604]: E0113 20:46:30.715511 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.715904 kubelet[2604]: E0113 20:46:30.715848 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.715904 kubelet[2604]: W0113 20:46:30.715858 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.716029 kubelet[2604]: E0113 20:46:30.715867 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.716293 kubelet[2604]: E0113 20:46:30.716273 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.716342 kubelet[2604]: W0113 20:46:30.716291 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.716342 kubelet[2604]: E0113 20:46:30.716307 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.719381 containerd[1502]: time="2025-01-13T20:46:30.719276567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:30.719381 containerd[1502]: time="2025-01-13T20:46:30.719344320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:30.719381 containerd[1502]: time="2025-01-13T20:46:30.719357658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.720070 containerd[1502]: time="2025-01-13T20:46:30.720018164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.735411 kubelet[2604]: E0113 20:46:30.735338 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:30.739564 containerd[1502]: time="2025-01-13T20:46:30.738438357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86jnl,Uid:ac4bb3a9-6cd3-4950-b949-06f2530803c5,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:30.739695 kubelet[2604]: E0113 20:46:30.738918 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.739695 kubelet[2604]: W0113 20:46:30.738936 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.739695 kubelet[2604]: E0113 20:46:30.738958 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.739695 kubelet[2604]: I0113 20:46:30.738994 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aab8c9ce-5c63-4682-8391-52de7028ab06-varrun\") pod \"csi-node-driver-4zbbn\" (UID: \"aab8c9ce-5c63-4682-8391-52de7028ab06\") " pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:30.739695 kubelet[2604]: E0113 20:46:30.739409 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.739695 kubelet[2604]: W0113 20:46:30.739423 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.739695 kubelet[2604]: E0113 20:46:30.739442 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.739695 kubelet[2604]: I0113 20:46:30.739460 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aab8c9ce-5c63-4682-8391-52de7028ab06-socket-dir\") pod \"csi-node-driver-4zbbn\" (UID: \"aab8c9ce-5c63-4682-8391-52de7028ab06\") " pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:30.740024 kubelet[2604]: E0113 20:46:30.740007 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.740024 kubelet[2604]: W0113 20:46:30.740020 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.740075 kubelet[2604]: E0113 20:46:30.740031 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.740075 kubelet[2604]: I0113 20:46:30.740046 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkw6w\" (UniqueName: \"kubernetes.io/projected/aab8c9ce-5c63-4682-8391-52de7028ab06-kube-api-access-lkw6w\") pod \"csi-node-driver-4zbbn\" (UID: \"aab8c9ce-5c63-4682-8391-52de7028ab06\") " pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:30.740644 kubelet[2604]: E0113 20:46:30.740615 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.740644 kubelet[2604]: W0113 20:46:30.740631 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.741102 kubelet[2604]: E0113 20:46:30.740751 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.741145 kubelet[2604]: I0113 20:46:30.741111 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aab8c9ce-5c63-4682-8391-52de7028ab06-kubelet-dir\") pod \"csi-node-driver-4zbbn\" (UID: \"aab8c9ce-5c63-4682-8391-52de7028ab06\") " pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:30.741518 kubelet[2604]: E0113 20:46:30.741311 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.741652 kubelet[2604]: W0113 20:46:30.741564 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.741731 kubelet[2604]: E0113 20:46:30.741715 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.742028 kubelet[2604]: E0113 20:46:30.742006 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.742028 kubelet[2604]: W0113 20:46:30.742023 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.742090 kubelet[2604]: E0113 20:46:30.742040 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.742438 kubelet[2604]: E0113 20:46:30.742250 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.742438 kubelet[2604]: W0113 20:46:30.742261 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.742438 kubelet[2604]: E0113 20:46:30.742273 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.742608 kubelet[2604]: E0113 20:46:30.742594 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.742675 kubelet[2604]: W0113 20:46:30.742662 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.742751 kubelet[2604]: E0113 20:46:30.742736 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.742848 kubelet[2604]: I0113 20:46:30.742822 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aab8c9ce-5c63-4682-8391-52de7028ab06-registration-dir\") pod \"csi-node-driver-4zbbn\" (UID: \"aab8c9ce-5c63-4682-8391-52de7028ab06\") " pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:30.743318 kubelet[2604]: E0113 20:46:30.743106 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.743318 kubelet[2604]: W0113 20:46:30.743136 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.744219 kubelet[2604]: E0113 20:46:30.743358 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.744219 kubelet[2604]: E0113 20:46:30.743686 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.744219 kubelet[2604]: W0113 20:46:30.743698 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.744219 kubelet[2604]: E0113 20:46:30.743710 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.744219 kubelet[2604]: E0113 20:46:30.744077 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.744219 kubelet[2604]: W0113 20:46:30.744087 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.744219 kubelet[2604]: E0113 20:46:30.744098 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.744431 kubelet[2604]: E0113 20:46:30.744298 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.744431 kubelet[2604]: W0113 20:46:30.744307 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.744431 kubelet[2604]: E0113 20:46:30.744317 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.744577 kubelet[2604]: E0113 20:46:30.744524 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.744577 kubelet[2604]: W0113 20:46:30.744541 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.744577 kubelet[2604]: E0113 20:46:30.744552 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.744781 kubelet[2604]: E0113 20:46:30.744749 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.744781 kubelet[2604]: W0113 20:46:30.744766 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.744781 kubelet[2604]: E0113 20:46:30.744776 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.745019 kubelet[2604]: E0113 20:46:30.745005 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.745019 kubelet[2604]: W0113 20:46:30.745018 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.745075 kubelet[2604]: E0113 20:46:30.745029 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.746231 systemd[1]: Started cri-containerd-b460150a964466886df0460f8586cbd2ce09561c12864315fd7df8b9e39da9ea.scope - libcontainer container b460150a964466886df0460f8586cbd2ce09561c12864315fd7df8b9e39da9ea. Jan 13 20:46:30.786958 containerd[1502]: time="2025-01-13T20:46:30.786836917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8499f7f88f-wwx7d,Uid:3ba67851-9194-4721-a9f4-8eb95459b3d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b460150a964466886df0460f8586cbd2ce09561c12864315fd7df8b9e39da9ea\"" Jan 13 20:46:30.787597 kubelet[2604]: E0113 20:46:30.787565 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:30.788413 containerd[1502]: time="2025-01-13T20:46:30.788386750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 20:46:30.844736 kubelet[2604]: E0113 20:46:30.844699 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.844736 kubelet[2604]: W0113 20:46:30.844725 2604 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.844736 kubelet[2604]: E0113 20:46:30.844748 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.845227 kubelet[2604]: E0113 20:46:30.845060 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.845227 kubelet[2604]: W0113 20:46:30.845070 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.845227 kubelet[2604]: E0113 20:46:30.845088 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.845345 kubelet[2604]: E0113 20:46:30.845322 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.845345 kubelet[2604]: W0113 20:46:30.845339 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.845452 kubelet[2604]: E0113 20:46:30.845360 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.845719 kubelet[2604]: E0113 20:46:30.845665 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.845864 kubelet[2604]: W0113 20:46:30.845733 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.845864 kubelet[2604]: E0113 20:46:30.845774 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.846560 kubelet[2604]: E0113 20:46:30.846169 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.846560 kubelet[2604]: W0113 20:46:30.846186 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.846560 kubelet[2604]: E0113 20:46:30.846220 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.846560 kubelet[2604]: E0113 20:46:30.846514 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.846560 kubelet[2604]: W0113 20:46:30.846528 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.846754 kubelet[2604]: E0113 20:46:30.846719 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.847027 kubelet[2604]: E0113 20:46:30.847004 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.847027 kubelet[2604]: W0113 20:46:30.847019 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.847139 kubelet[2604]: E0113 20:46:30.847064 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.847300 kubelet[2604]: E0113 20:46:30.847284 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.847347 kubelet[2604]: W0113 20:46:30.847309 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.847347 kubelet[2604]: E0113 20:46:30.847340 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.847595 kubelet[2604]: E0113 20:46:30.847571 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.847595 kubelet[2604]: W0113 20:46:30.847583 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.847671 kubelet[2604]: E0113 20:46:30.847614 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.847898 kubelet[2604]: E0113 20:46:30.847858 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.847898 kubelet[2604]: W0113 20:46:30.847898 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.847998 kubelet[2604]: E0113 20:46:30.847975 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.848179 kubelet[2604]: E0113 20:46:30.848150 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.848179 kubelet[2604]: W0113 20:46:30.848172 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.848262 kubelet[2604]: E0113 20:46:30.848213 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.848391 kubelet[2604]: E0113 20:46:30.848372 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.848391 kubelet[2604]: W0113 20:46:30.848383 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.848472 kubelet[2604]: E0113 20:46:30.848445 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.848677 kubelet[2604]: E0113 20:46:30.848626 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.848677 kubelet[2604]: W0113 20:46:30.848640 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.848779 kubelet[2604]: E0113 20:46:30.848681 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.848968 kubelet[2604]: E0113 20:46:30.848917 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.848968 kubelet[2604]: W0113 20:46:30.848927 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.848968 kubelet[2604]: E0113 20:46:30.848939 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.849340 kubelet[2604]: E0113 20:46:30.849216 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.849340 kubelet[2604]: W0113 20:46:30.849232 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.849340 kubelet[2604]: E0113 20:46:30.849301 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.849710 kubelet[2604]: E0113 20:46:30.849661 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.849710 kubelet[2604]: W0113 20:46:30.849708 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.849841 kubelet[2604]: E0113 20:46:30.849822 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.850111 kubelet[2604]: E0113 20:46:30.850093 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.850111 kubelet[2604]: W0113 20:46:30.850106 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.850195 kubelet[2604]: E0113 20:46:30.850139 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.850433 kubelet[2604]: E0113 20:46:30.850416 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.850433 kubelet[2604]: W0113 20:46:30.850428 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.850524 kubelet[2604]: E0113 20:46:30.850462 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.850741 kubelet[2604]: E0113 20:46:30.850726 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.850741 kubelet[2604]: W0113 20:46:30.850737 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.850829 kubelet[2604]: E0113 20:46:30.850769 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.851023 kubelet[2604]: E0113 20:46:30.850993 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.851023 kubelet[2604]: W0113 20:46:30.851005 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.851129 kubelet[2604]: E0113 20:46:30.851111 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.851240 kubelet[2604]: E0113 20:46:30.851226 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.851240 kubelet[2604]: W0113 20:46:30.851235 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.851323 kubelet[2604]: E0113 20:46:30.851291 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.851491 kubelet[2604]: E0113 20:46:30.851476 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.851491 kubelet[2604]: W0113 20:46:30.851488 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.851591 kubelet[2604]: E0113 20:46:30.851517 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.851784 kubelet[2604]: E0113 20:46:30.851766 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.851784 kubelet[2604]: W0113 20:46:30.851780 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.851867 kubelet[2604]: E0113 20:46:30.851798 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.852103 kubelet[2604]: E0113 20:46:30.852087 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.852103 kubelet[2604]: W0113 20:46:30.852099 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.852180 kubelet[2604]: E0113 20:46:30.852119 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:30.852621 kubelet[2604]: E0113 20:46:30.852601 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.852621 kubelet[2604]: W0113 20:46:30.852619 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.852760 kubelet[2604]: E0113 20:46:30.852635 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:30.860219 kubelet[2604]: E0113 20:46:30.860190 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:30.860219 kubelet[2604]: W0113 20:46:30.860210 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:30.860325 kubelet[2604]: E0113 20:46:30.860226 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:31.097382 containerd[1502]: time="2025-01-13T20:46:31.097256364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:31.097579 containerd[1502]: time="2025-01-13T20:46:31.097440732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:31.097579 containerd[1502]: time="2025-01-13T20:46:31.097488903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:31.097660 containerd[1502]: time="2025-01-13T20:46:31.097613945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:31.118093 systemd[1]: Started cri-containerd-537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce.scope - libcontainer container 537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce. 
Jan 13 20:46:31.146168 containerd[1502]: time="2025-01-13T20:46:31.146116591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86jnl,Uid:ac4bb3a9-6cd3-4950-b949-06f2530803c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\"" Jan 13 20:46:31.146841 kubelet[2604]: E0113 20:46:31.146817 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:31.441516 kubelet[2604]: E0113 20:46:31.441398 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:31.521812 kubelet[2604]: E0113 20:46:31.521782 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:31.521812 kubelet[2604]: W0113 20:46:31.521803 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:31.521812 kubelet[2604]: E0113 20:46:31.521824 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:31.522086 kubelet[2604]: E0113 20:46:31.522068 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:31.522086 kubelet[2604]: W0113 20:46:31.522080 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:31.522160 kubelet[2604]: E0113 20:46:31.522089 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:31.522342 kubelet[2604]: E0113 20:46:31.522328 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:31.522391 kubelet[2604]: W0113 20:46:31.522342 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:31.522391 kubelet[2604]: E0113 20:46:31.522352 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:31.522584 kubelet[2604]: E0113 20:46:31.522558 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:31.522584 kubelet[2604]: W0113 20:46:31.522569 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:31.522584 kubelet[2604]: E0113 20:46:31.522579 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:31.522822 kubelet[2604]: E0113 20:46:31.522801 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:31.522822 kubelet[2604]: W0113 20:46:31.522811 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:31.522822 kubelet[2604]: E0113 20:46:31.522818 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 20:46:31.523034 kubelet[2604]: E0113 20:46:31.523020 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.523034 kubelet[2604]: W0113 20:46:31.523030 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.523102 kubelet[2604]: E0113 20:46:31.523038 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.523243 kubelet[2604]: E0113 20:46:31.523227 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.523243 kubelet[2604]: W0113 20:46:31.523240 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.523290 kubelet[2604]: E0113 20:46:31.523252 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.523441 kubelet[2604]: E0113 20:46:31.523427 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.523465 kubelet[2604]: W0113 20:46:31.523449 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.523465 kubelet[2604]: E0113 20:46:31.523459 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.523649 kubelet[2604]: E0113 20:46:31.523625 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.523677 kubelet[2604]: W0113 20:46:31.523650 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.523677 kubelet[2604]: E0113 20:46:31.523660 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.523894 kubelet[2604]: E0113 20:46:31.523864 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.523920 kubelet[2604]: W0113 20:46:31.523894 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.523920 kubelet[2604]: E0113 20:46:31.523905 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.524129 kubelet[2604]: E0113 20:46:31.524106 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.524129 kubelet[2604]: W0113 20:46:31.524125 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.524176 kubelet[2604]: E0113 20:46:31.524136 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.524329 kubelet[2604]: E0113 20:46:31.524317 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.524329 kubelet[2604]: W0113 20:46:31.524325 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.524377 kubelet[2604]: E0113 20:46:31.524333 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.524508 kubelet[2604]: E0113 20:46:31.524496 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.524508 kubelet[2604]: W0113 20:46:31.524504 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.524551 kubelet[2604]: E0113 20:46:31.524511 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.524707 kubelet[2604]: E0113 20:46:31.524693 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.524707 kubelet[2604]: W0113 20:46:31.524703 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.524865 kubelet[2604]: E0113 20:46:31.524712 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.524931 kubelet[2604]: E0113 20:46:31.524894 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.524931 kubelet[2604]: W0113 20:46:31.524901 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.524931 kubelet[2604]: E0113 20:46:31.524908 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.525196 kubelet[2604]: E0113 20:46:31.525182 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.525196 kubelet[2604]: W0113 20:46:31.525191 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.525249 kubelet[2604]: E0113 20:46:31.525198 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.525478 kubelet[2604]: E0113 20:46:31.525423 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.525478 kubelet[2604]: W0113 20:46:31.525457 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.525542 kubelet[2604]: E0113 20:46:31.525486 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.525740 kubelet[2604]: E0113 20:46:31.525718 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.525740 kubelet[2604]: W0113 20:46:31.525736 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.525821 kubelet[2604]: E0113 20:46:31.525745 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.525974 kubelet[2604]: E0113 20:46:31.525960 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.525974 kubelet[2604]: W0113 20:46:31.525971 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.526030 kubelet[2604]: E0113 20:46:31.525978 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.526213 kubelet[2604]: E0113 20:46:31.526198 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.526213 kubelet[2604]: W0113 20:46:31.526209 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.526283 kubelet[2604]: E0113 20:46:31.526219 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.526456 kubelet[2604]: E0113 20:46:31.526442 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.526456 kubelet[2604]: W0113 20:46:31.526455 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.526529 kubelet[2604]: E0113 20:46:31.526466 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.526689 kubelet[2604]: E0113 20:46:31.526672 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.526689 kubelet[2604]: W0113 20:46:31.526684 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.526751 kubelet[2604]: E0113 20:46:31.526695 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.526963 kubelet[2604]: E0113 20:46:31.526946 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.526963 kubelet[2604]: W0113 20:46:31.526957 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.526963 kubelet[2604]: E0113 20:46:31.526965 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.527158 kubelet[2604]: E0113 20:46:31.527144 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.527158 kubelet[2604]: W0113 20:46:31.527154 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.527204 kubelet[2604]: E0113 20:46:31.527162 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:31.527343 kubelet[2604]: E0113 20:46:31.527330 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:46:31.527343 kubelet[2604]: W0113 20:46:31.527339 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:46:31.527398 kubelet[2604]: E0113 20:46:31.527346 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:46:32.256651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861092644.mount: Deactivated successfully.
Jan 13 20:46:32.961084 kubelet[2604]: E0113 20:46:32.961032 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06"
Jan 13 20:46:33.820010 containerd[1502]: time="2025-01-13T20:46:33.819955117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:33.820640 containerd[1502]: time="2025-01-13T20:46:33.820599791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 20:46:33.821596 containerd[1502]: time="2025-01-13T20:46:33.821571024Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:33.824132 containerd[1502]: time="2025-01-13T20:46:33.824091486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:33.824657 containerd[1502]: time="2025-01-13T20:46:33.824631660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.036218625s"
Jan 13 20:46:33.824692 containerd[1502]: time="2025-01-13T20:46:33.824658947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 20:46:33.825559 containerd[1502]: time="2025-01-13T20:46:33.825532086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:46:33.833298 containerd[1502]: time="2025-01-13T20:46:33.833248402Z" level=info msg="CreateContainer within sandbox \"b460150a964466886df0460f8586cbd2ce09561c12864315fd7df8b9e39da9ea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:46:33.849240 containerd[1502]: time="2025-01-13T20:46:33.849193782Z" level=info msg="CreateContainer within sandbox \"b460150a964466886df0460f8586cbd2ce09561c12864315fd7df8b9e39da9ea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eb6843b61b0894dde3dccd0261e7b97bc4e4c8228f5d05de645f724df74f0b77\""
Jan 13 20:46:33.849628 containerd[1502]: time="2025-01-13T20:46:33.849599267Z" level=info msg="StartContainer for \"eb6843b61b0894dde3dccd0261e7b97bc4e4c8228f5d05de645f724df74f0b77\""
Jan 13 20:46:33.876079 systemd[1]: Started cri-containerd-eb6843b61b0894dde3dccd0261e7b97bc4e4c8228f5d05de645f724df74f0b77.scope - libcontainer container eb6843b61b0894dde3dccd0261e7b97bc4e4c8228f5d05de645f724df74f0b77.
Jan 13 20:46:33.913411 containerd[1502]: time="2025-01-13T20:46:33.913372930Z" level=info msg="StartContainer for \"eb6843b61b0894dde3dccd0261e7b97bc4e4c8228f5d05de645f724df74f0b77\" returns successfully" Jan 13 20:46:34.002119 kubelet[2604]: E0113 20:46:34.002076 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:34.047357 kubelet[2604]: E0113 20:46:34.047322 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.047357 kubelet[2604]: W0113 20:46:34.047347 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.047357 kubelet[2604]: E0113 20:46:34.047367 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.047716 kubelet[2604]: E0113 20:46:34.047694 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.047716 kubelet[2604]: W0113 20:46:34.047706 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.047716 kubelet[2604]: E0113 20:46:34.047716 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.048054 kubelet[2604]: E0113 20:46:34.048022 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.048092 kubelet[2604]: W0113 20:46:34.048053 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.048092 kubelet[2604]: E0113 20:46:34.048083 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.048366 kubelet[2604]: E0113 20:46:34.048350 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.048366 kubelet[2604]: W0113 20:46:34.048361 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.048430 kubelet[2604]: E0113 20:46:34.048371 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.048588 kubelet[2604]: E0113 20:46:34.048573 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.048588 kubelet[2604]: W0113 20:46:34.048584 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.048643 kubelet[2604]: E0113 20:46:34.048592 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.048802 kubelet[2604]: E0113 20:46:34.048779 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.048802 kubelet[2604]: W0113 20:46:34.048794 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.048802 kubelet[2604]: E0113 20:46:34.048803 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.049016 kubelet[2604]: E0113 20:46:34.049003 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.049016 kubelet[2604]: W0113 20:46:34.049013 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.049065 kubelet[2604]: E0113 20:46:34.049023 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.049223 kubelet[2604]: E0113 20:46:34.049209 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.049223 kubelet[2604]: W0113 20:46:34.049219 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.049269 kubelet[2604]: E0113 20:46:34.049227 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.049442 kubelet[2604]: E0113 20:46:34.049429 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.049442 kubelet[2604]: W0113 20:46:34.049440 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.049605 kubelet[2604]: E0113 20:46:34.049448 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.049704 kubelet[2604]: E0113 20:46:34.049689 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.049704 kubelet[2604]: W0113 20:46:34.049700 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.049767 kubelet[2604]: E0113 20:46:34.049709 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.049928 kubelet[2604]: E0113 20:46:34.049915 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.049928 kubelet[2604]: W0113 20:46:34.049925 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.049980 kubelet[2604]: E0113 20:46:34.049933 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.050153 kubelet[2604]: E0113 20:46:34.050139 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.050153 kubelet[2604]: W0113 20:46:34.050151 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.050204 kubelet[2604]: E0113 20:46:34.050170 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.050372 kubelet[2604]: E0113 20:46:34.050358 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.050372 kubelet[2604]: W0113 20:46:34.050370 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.050429 kubelet[2604]: E0113 20:46:34.050379 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.050585 kubelet[2604]: E0113 20:46:34.050572 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.050585 kubelet[2604]: W0113 20:46:34.050582 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.050633 kubelet[2604]: E0113 20:46:34.050590 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.050777 kubelet[2604]: E0113 20:46:34.050764 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.050777 kubelet[2604]: W0113 20:46:34.050774 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.050824 kubelet[2604]: E0113 20:46:34.050782 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.072178 kubelet[2604]: E0113 20:46:34.072097 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.072178 kubelet[2604]: W0113 20:46:34.072111 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.072178 kubelet[2604]: E0113 20:46:34.072131 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.072444 kubelet[2604]: E0113 20:46:34.072428 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.072486 kubelet[2604]: W0113 20:46:34.072441 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.072516 kubelet[2604]: E0113 20:46:34.072490 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.072758 kubelet[2604]: E0113 20:46:34.072747 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.072758 kubelet[2604]: W0113 20:46:34.072756 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.072814 kubelet[2604]: E0113 20:46:34.072768 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.073115 kubelet[2604]: E0113 20:46:34.073102 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.073115 kubelet[2604]: W0113 20:46:34.073112 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.073187 kubelet[2604]: E0113 20:46:34.073135 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.073367 kubelet[2604]: E0113 20:46:34.073343 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.073367 kubelet[2604]: W0113 20:46:34.073353 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.073668 kubelet[2604]: E0113 20:46:34.073387 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.073668 kubelet[2604]: E0113 20:46:34.073577 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.073668 kubelet[2604]: W0113 20:46:34.073586 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.073668 kubelet[2604]: E0113 20:46:34.073670 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.073811 kubelet[2604]: E0113 20:46:34.073786 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.073811 kubelet[2604]: W0113 20:46:34.073795 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.073856 kubelet[2604]: E0113 20:46:34.073834 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.074021 kubelet[2604]: E0113 20:46:34.074007 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.074021 kubelet[2604]: W0113 20:46:34.074018 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.074072 kubelet[2604]: E0113 20:46:34.074045 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:34.074223 kubelet[2604]: E0113 20:46:34.074202 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.074223 kubelet[2604]: W0113 20:46:34.074213 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.074274 kubelet[2604]: E0113 20:46:34.074227 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.077056 kubelet[2604]: I0113 20:46:34.077010 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8499f7f88f-wwx7d" podStartSLOduration=1.039732959 podStartE2EDuration="4.076999755s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:30.788160954 +0000 UTC m=+10.899776041" lastFinishedPulling="2025-01-13 20:46:33.825427748 +0000 UTC m=+13.937042837" observedRunningTime="2025-01-13 20:46:34.076725976 +0000 UTC m=+14.188341063" watchObservedRunningTime="2025-01-13 20:46:34.076999755 +0000 UTC m=+14.188614842" Jan 13 20:46:34.077309 kubelet[2604]: E0113 20:46:34.077250 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:34.077309 kubelet[2604]: W0113 20:46:34.077265 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:34.077309 kubelet[2604]: E0113 20:46:34.077276 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:34.960669 kubelet[2604]: E0113 20:46:34.960614 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:35.002865 kubelet[2604]: I0113 20:46:35.002838 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:35.003245 kubelet[2604]: E0113 20:46:35.003132 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:35.058075 kubelet[2604]: E0113 20:46:35.058032 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:46:35.058075 kubelet[2604]: W0113 20:46:35.058059 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:46:35.058075 kubelet[2604]: E0113 20:46:35.058084 2604 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:46:35.731389 containerd[1502]: time="2025-01-13T20:46:35.731340550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:35.732304 containerd[1502]: time="2025-01-13T20:46:35.732267495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:46:35.737505 containerd[1502]: time="2025-01-13T20:46:35.737477195Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:35.739997 containerd[1502]: time="2025-01-13T20:46:35.739941809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:35.740426 containerd[1502]: time="2025-01-13T20:46:35.740388292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.914831945s" Jan 13 20:46:35.740466 containerd[1502]: time="2025-01-13T20:46:35.740427353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:46:35.742310 containerd[1502]: time="2025-01-13T20:46:35.742277427Z" level=info msg="CreateContainer within sandbox \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:46:35.759818 containerd[1502]: time="2025-01-13T20:46:35.759764314Z" level=info msg="CreateContainer within sandbox \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341\"" Jan 13 20:46:35.760239 containerd[1502]: time="2025-01-13T20:46:35.760203993Z" level=info msg="StartContainer for \"7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341\"" Jan 13 20:46:35.794072 systemd[1]: Started cri-containerd-7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341.scope - libcontainer container 7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341. Jan 13 20:46:35.827615 containerd[1502]: time="2025-01-13T20:46:35.827567611Z" level=info msg="StartContainer for \"7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341\" returns successfully" Jan 13 20:46:35.856978 systemd[1]: cri-containerd-7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341.scope: Deactivated successfully. Jan 13 20:46:35.880936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341-rootfs.mount: Deactivated successfully. 
Jan 13 20:46:36.082339 containerd[1502]: time="2025-01-13T20:46:36.082278883Z" level=info msg="shim disconnected" id=7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341 namespace=k8s.io Jan 13 20:46:36.082339 containerd[1502]: time="2025-01-13T20:46:36.082329607Z" level=warning msg="cleaning up after shim disconnected" id=7ac02fbe04f7420e67d5a13f6e5c033a3da612b94e263959856e99276b50a341 namespace=k8s.io Jan 13 20:46:36.082339 containerd[1502]: time="2025-01-13T20:46:36.082337755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:36.082843 kubelet[2604]: E0113 20:46:36.082819 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:36.960110 kubelet[2604]: E0113 20:46:36.960059 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:37.009493 kubelet[2604]: E0113 20:46:37.009463 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:37.010228 containerd[1502]: time="2025-01-13T20:46:37.009991410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:46:38.960269 kubelet[2604]: E0113 20:46:38.960210 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:40.976903 kubelet[2604]: E0113 20:46:40.976816 
2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:42.395179 containerd[1502]: time="2025-01-13T20:46:42.395139885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:42.395864 containerd[1502]: time="2025-01-13T20:46:42.395834591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:46:42.397154 containerd[1502]: time="2025-01-13T20:46:42.397120835Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:42.399551 containerd[1502]: time="2025-01-13T20:46:42.399523719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:42.400118 containerd[1502]: time="2025-01-13T20:46:42.400092370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.39006715s" Jan 13 20:46:42.400118 containerd[1502]: time="2025-01-13T20:46:42.400114515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:46:42.402002 containerd[1502]: 
time="2025-01-13T20:46:42.401966663Z" level=info msg="CreateContainer within sandbox \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:46:42.417633 containerd[1502]: time="2025-01-13T20:46:42.417592523Z" level=info msg="CreateContainer within sandbox \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0\"" Jan 13 20:46:42.418326 containerd[1502]: time="2025-01-13T20:46:42.417958605Z" level=info msg="StartContainer for \"8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0\"" Jan 13 20:46:42.449018 systemd[1]: Started cri-containerd-8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0.scope - libcontainer container 8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0. Jan 13 20:46:42.477145 containerd[1502]: time="2025-01-13T20:46:42.477023855Z" level=info msg="StartContainer for \"8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0\" returns successfully" Jan 13 20:46:42.960518 kubelet[2604]: E0113 20:46:42.960455 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:43.034427 kubelet[2604]: E0113 20:46:43.034395 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:43.286970 systemd[1]: cri-containerd-8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0.scope: Deactivated successfully. 
Jan 13 20:46:43.294006 kubelet[2604]: I0113 20:46:43.293975 2604 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:46:43.350092 containerd[1502]: time="2025-01-13T20:46:43.350021842Z" level=info msg="shim disconnected" id=8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0 namespace=k8s.io Jan 13 20:46:43.350092 containerd[1502]: time="2025-01-13T20:46:43.350075180Z" level=warning msg="cleaning up after shim disconnected" id=8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0 namespace=k8s.io Jan 13 20:46:43.350092 containerd[1502]: time="2025-01-13T20:46:43.350083487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:43.351741 systemd[1]: Created slice kubepods-burstable-podc07ee0db_ecb3_404d_8299_14d38d4f24e5.slice - libcontainer container kubepods-burstable-podc07ee0db_ecb3_404d_8299_14d38d4f24e5.slice. Jan 13 20:46:43.361705 systemd[1]: Created slice kubepods-besteffort-poda8175534_edcd_43aa_9f18_0f7c717c2015.slice - libcontainer container kubepods-besteffort-poda8175534_edcd_43aa_9f18_0f7c717c2015.slice. Jan 13 20:46:43.369532 systemd[1]: Created slice kubepods-besteffort-pod6253a823_a0be_41d8_b9d8_038c03511377.slice - libcontainer container kubepods-besteffort-pod6253a823_a0be_41d8_b9d8_038c03511377.slice. Jan 13 20:46:43.376021 systemd[1]: Created slice kubepods-burstable-podc50d7bc1_3e5c_4530_b31c_b018c3931fbb.slice - libcontainer container kubepods-burstable-podc50d7bc1_3e5c_4530_b31c_b018c3931fbb.slice. Jan 13 20:46:43.382179 systemd[1]: Created slice kubepods-besteffort-pod16b2a8f4_c5cc_4088_8589_adcfeced0140.slice - libcontainer container kubepods-besteffort-pod16b2a8f4_c5cc_4088_8589_adcfeced0140.slice. Jan 13 20:46:43.414435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8791bc1d82a78ebc0ca56896c8bbecd6b82a4db5a4339f694462e83b80f4ddf0-rootfs.mount: Deactivated successfully. 
Jan 13 20:46:43.444512 kubelet[2604]: I0113 20:46:43.444464 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8175534-edcd-43aa-9f18-0f7c717c2015-tigera-ca-bundle\") pod \"calico-kube-controllers-68d84f998-vczjr\" (UID: \"a8175534-edcd-43aa-9f18-0f7c717c2015\") " pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:43.444702 kubelet[2604]: I0113 20:46:43.444517 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16b2a8f4-c5cc-4088-8589-adcfeced0140-calico-apiserver-certs\") pod \"calico-apiserver-5848c99678-rzkmh\" (UID: \"16b2a8f4-c5cc-4088-8589-adcfeced0140\") " pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:43.444702 kubelet[2604]: I0113 20:46:43.444620 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c50d7bc1-3e5c-4530-b31c-b018c3931fbb-config-volume\") pod \"coredns-6f6b679f8f-k7xxm\" (UID: \"c50d7bc1-3e5c-4530-b31c-b018c3931fbb\") " pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:43.444702 kubelet[2604]: I0113 20:46:43.444657 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqqd\" (UniqueName: \"kubernetes.io/projected/c07ee0db-ecb3-404d-8299-14d38d4f24e5-kube-api-access-djqqd\") pod \"coredns-6f6b679f8f-kkgfd\" (UID: \"c07ee0db-ecb3-404d-8299-14d38d4f24e5\") " pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:43.444702 kubelet[2604]: I0113 20:46:43.444693 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7q9p\" (UniqueName: \"kubernetes.io/projected/16b2a8f4-c5cc-4088-8589-adcfeced0140-kube-api-access-z7q9p\") pod 
\"calico-apiserver-5848c99678-rzkmh\" (UID: \"16b2a8f4-c5cc-4088-8589-adcfeced0140\") " pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:43.444847 kubelet[2604]: I0113 20:46:43.444728 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6253a823-a0be-41d8-b9d8-038c03511377-calico-apiserver-certs\") pod \"calico-apiserver-5848c99678-gl2ks\" (UID: \"6253a823-a0be-41d8-b9d8-038c03511377\") " pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:43.444847 kubelet[2604]: I0113 20:46:43.444748 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7vp\" (UniqueName: \"kubernetes.io/projected/6253a823-a0be-41d8-b9d8-038c03511377-kube-api-access-nd7vp\") pod \"calico-apiserver-5848c99678-gl2ks\" (UID: \"6253a823-a0be-41d8-b9d8-038c03511377\") " pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:43.444847 kubelet[2604]: I0113 20:46:43.444766 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vdn\" (UniqueName: \"kubernetes.io/projected/c50d7bc1-3e5c-4530-b31c-b018c3931fbb-kube-api-access-28vdn\") pod \"coredns-6f6b679f8f-k7xxm\" (UID: \"c50d7bc1-3e5c-4530-b31c-b018c3931fbb\") " pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:43.444847 kubelet[2604]: I0113 20:46:43.444784 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c07ee0db-ecb3-404d-8299-14d38d4f24e5-config-volume\") pod \"coredns-6f6b679f8f-kkgfd\" (UID: \"c07ee0db-ecb3-404d-8299-14d38d4f24e5\") " pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:43.444847 kubelet[2604]: I0113 20:46:43.444804 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9jh57\" (UniqueName: \"kubernetes.io/projected/a8175534-edcd-43aa-9f18-0f7c717c2015-kube-api-access-9jh57\") pod \"calico-kube-controllers-68d84f998-vczjr\" (UID: \"a8175534-edcd-43aa-9f18-0f7c717c2015\") " pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:43.658814 kubelet[2604]: E0113 20:46:43.658690 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:43.659951 containerd[1502]: time="2025-01-13T20:46:43.659910103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:43.666732 containerd[1502]: time="2025-01-13T20:46:43.666692019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:43.672639 containerd[1502]: time="2025-01-13T20:46:43.672598067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:46:43.679260 kubelet[2604]: E0113 20:46:43.679133 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:43.680253 containerd[1502]: time="2025-01-13T20:46:43.680221753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:43.685485 containerd[1502]: time="2025-01-13T20:46:43.685438668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:0,}" 
Jan 13 20:46:43.765369 containerd[1502]: time="2025-01-13T20:46:43.765290299Z" level=error msg="Failed to destroy network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.766194 containerd[1502]: time="2025-01-13T20:46:43.766098842Z" level=error msg="encountered an error cleaning up failed sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.766194 containerd[1502]: time="2025-01-13T20:46:43.766154595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.766524 kubelet[2604]: E0113 20:46:43.766450 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.766590 kubelet[2604]: E0113 20:46:43.766522 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:43.766590 kubelet[2604]: E0113 20:46:43.766550 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:43.766641 kubelet[2604]: E0113 20:46:43.766587 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkgfd" podUID="c07ee0db-ecb3-404d-8299-14d38d4f24e5" Jan 13 20:46:43.790939 containerd[1502]: time="2025-01-13T20:46:43.789855610Z" level=error msg="Failed to destroy network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:46:43.790939 containerd[1502]: time="2025-01-13T20:46:43.790338694Z" level=error msg="encountered an error cleaning up failed sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.790939 containerd[1502]: time="2025-01-13T20:46:43.790400078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.791199 kubelet[2604]: E0113 20:46:43.790660 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.791199 kubelet[2604]: E0113 20:46:43.790726 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:43.791199 kubelet[2604]: E0113 20:46:43.790751 
2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:43.791328 kubelet[2604]: E0113 20:46:43.790809 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podUID="6253a823-a0be-41d8-b9d8-038c03511377" Jan 13 20:46:43.792462 containerd[1502]: time="2025-01-13T20:46:43.792311509Z" level=error msg="Failed to destroy network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.792959 containerd[1502]: time="2025-01-13T20:46:43.792768381Z" level=error msg="encountered an error cleaning up failed sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.792959 containerd[1502]: time="2025-01-13T20:46:43.792817551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.793179 kubelet[2604]: E0113 20:46:43.793123 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.793237 kubelet[2604]: E0113 20:46:43.793185 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:43.793237 kubelet[2604]: E0113 20:46:43.793212 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:43.793312 kubelet[2604]: E0113 20:46:43.793253 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015" Jan 13 20:46:43.801179 containerd[1502]: time="2025-01-13T20:46:43.801131331Z" level=error msg="Failed to destroy network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.801583 containerd[1502]: time="2025-01-13T20:46:43.801533903Z" level=error msg="encountered an error cleaning up failed sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.801639 containerd[1502]: time="2025-01-13T20:46:43.801610438Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.801825 kubelet[2604]: E0113 20:46:43.801788 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.801888 kubelet[2604]: E0113 20:46:43.801837 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:43.801888 kubelet[2604]: E0113 20:46:43.801855 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:43.801946 kubelet[2604]: E0113 20:46:43.801911 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb" Jan 13 20:46:43.808079 containerd[1502]: time="2025-01-13T20:46:43.808035204Z" level=error msg="Failed to destroy network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.808357 containerd[1502]: time="2025-01-13T20:46:43.808331190Z" level=error msg="encountered an error cleaning up failed sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.808391 containerd[1502]: time="2025-01-13T20:46:43.808372995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.808605 kubelet[2604]: E0113 20:46:43.808560 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:43.808677 kubelet[2604]: E0113 20:46:43.808626 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:43.808677 kubelet[2604]: E0113 20:46:43.808650 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:43.808731 kubelet[2604]: E0113 20:46:43.808700 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140"
Jan 13 20:46:44.037739 kubelet[2604]: E0113 20:46:44.037694 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:44.038482 containerd[1502]: time="2025-01-13T20:46:44.038439325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:46:44.038829 kubelet[2604]: I0113 20:46:44.038805 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838"
Jan 13 20:46:44.039370 containerd[1502]: time="2025-01-13T20:46:44.039332204Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\""
Jan 13 20:46:44.039579 containerd[1502]: time="2025-01-13T20:46:44.039551756Z" level=info msg="Ensure that sandbox 619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838 in task-service has been cleanup successfully"
Jan 13 20:46:44.039795 containerd[1502]: time="2025-01-13T20:46:44.039770868Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully"
Jan 13 20:46:44.039795 containerd[1502]: time="2025-01-13T20:46:44.039791660Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully"
Jan 13 20:46:44.039911 kubelet[2604]: I0113 20:46:44.039864 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6"
Jan 13 20:46:44.040368 containerd[1502]: time="2025-01-13T20:46:44.040330485Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\""
Jan 13 20:46:44.040548 containerd[1502]: time="2025-01-13T20:46:44.040530358Z" level=info msg="Ensure that sandbox 8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6 in task-service has been cleanup successfully"
Jan 13 20:46:44.041266 containerd[1502]: time="2025-01-13T20:46:44.041072100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:1,}"
Jan 13 20:46:44.041266 containerd[1502]: time="2025-01-13T20:46:44.041121409Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully"
Jan 13 20:46:44.041266 containerd[1502]: time="2025-01-13T20:46:44.041133895Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully"
Jan 13 20:46:44.041810 containerd[1502]: time="2025-01-13T20:46:44.041600365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 20:46:44.041852 kubelet[2604]: I0113 20:46:44.041800 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb"
Jan 13 20:46:44.042265 containerd[1502]: time="2025-01-13T20:46:44.042140372Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\""
Jan 13 20:46:44.042315 containerd[1502]: time="2025-01-13T20:46:44.042280385Z" level=info msg="Ensure that sandbox 02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb in task-service has been cleanup successfully"
Jan 13 20:46:44.042452 containerd[1502]: time="2025-01-13T20:46:44.042413784Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully"
Jan 13 20:46:44.042452 containerd[1502]: time="2025-01-13T20:46:44.042430517Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully"
Jan 13 20:46:44.042924 kubelet[2604]: E0113 20:46:44.042630 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:44.042978 containerd[1502]: time="2025-01-13T20:46:44.042797327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:1,}"
Jan 13 20:46:44.043147 kubelet[2604]: I0113 20:46:44.043124 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c"
Jan 13 20:46:44.043620 containerd[1502]: time="2025-01-13T20:46:44.043581718Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\""
Jan 13 20:46:44.043972 containerd[1502]: time="2025-01-13T20:46:44.043759937Z" level=info msg="Ensure that sandbox f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c in task-service has been cleanup successfully"
Jan 13 20:46:44.044515 containerd[1502]: time="2025-01-13T20:46:44.044294614Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully"
Jan 13 20:46:44.044515 containerd[1502]: time="2025-01-13T20:46:44.044310155Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully"
Jan 13 20:46:44.044717 kubelet[2604]: I0113 20:46:44.044381 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0"
Jan 13 20:46:44.045122 containerd[1502]: time="2025-01-13T20:46:44.044821075Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\""
Jan 13 20:46:44.045122 containerd[1502]: time="2025-01-13T20:46:44.044902549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 20:46:44.045122 containerd[1502]: time="2025-01-13T20:46:44.044974434Z" level=info msg="Ensure that sandbox b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0 in task-service has been cleanup successfully"
Jan 13 20:46:44.045267 containerd[1502]: time="2025-01-13T20:46:44.045248346Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully"
Jan 13 20:46:44.045267 containerd[1502]: time="2025-01-13T20:46:44.045265180Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully"
Jan 13 20:46:44.045415 kubelet[2604]: E0113 20:46:44.045386 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:44.045639 containerd[1502]: time="2025-01-13T20:46:44.045595936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:1,}"
Jan 13 20:46:44.165104 containerd[1502]: time="2025-01-13T20:46:44.165023906Z" level=error msg="Failed to destroy network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.165514 containerd[1502]: time="2025-01-13T20:46:44.165448031Z" level=error msg="encountered an error cleaning up failed sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.165514 containerd[1502]: time="2025-01-13T20:46:44.165504425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.165787 kubelet[2604]: E0113 20:46:44.165725 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.165833 kubelet[2604]: E0113 20:46:44.165788 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr"
Jan 13 20:46:44.165833 kubelet[2604]: E0113 20:46:44.165807 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr"
Jan 13 20:46:44.165899 kubelet[2604]: E0113 20:46:44.165844 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015"
Jan 13 20:46:44.177612 containerd[1502]: time="2025-01-13T20:46:44.177544607Z" level=error msg="Failed to destroy network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.178209 containerd[1502]: time="2025-01-13T20:46:44.178184837Z" level=error msg="encountered an error cleaning up failed sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.178341 containerd[1502]: time="2025-01-13T20:46:44.178311451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.178751 kubelet[2604]: E0113 20:46:44.178686 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.178808 containerd[1502]: time="2025-01-13T20:46:44.178700055Z" level=error msg="Failed to destroy network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.178867 kubelet[2604]: E0113 20:46:44.178772 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks"
Jan 13 20:46:44.178867 kubelet[2604]: E0113 20:46:44.178796 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks"
Jan 13 20:46:44.178867 kubelet[2604]: E0113 20:46:44.178863 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podUID="6253a823-a0be-41d8-b9d8-038c03511377"
Jan 13 20:46:44.179195 containerd[1502]: time="2025-01-13T20:46:44.179156745Z" level=error msg="encountered an error cleaning up failed sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.179237 containerd[1502]: time="2025-01-13T20:46:44.179216846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.179913 kubelet[2604]: E0113 20:46:44.179439 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.179913 kubelet[2604]: E0113 20:46:44.179494 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm"
Jan 13 20:46:44.179913 kubelet[2604]: E0113 20:46:44.179514 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm"
Jan 13 20:46:44.180054 kubelet[2604]: E0113 20:46:44.179550 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb"
Jan 13 20:46:44.180559 containerd[1502]: time="2025-01-13T20:46:44.180514832Z" level=error msg="Failed to destroy network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.181007 containerd[1502]: time="2025-01-13T20:46:44.180818002Z" level=error msg="encountered an error cleaning up failed sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.181007 containerd[1502]: time="2025-01-13T20:46:44.180861901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.181128 kubelet[2604]: E0113 20:46:44.181098 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.181180 kubelet[2604]: E0113 20:46:44.181145 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd"
Jan 13 20:46:44.181180 kubelet[2604]: E0113 20:46:44.181171 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd"
Jan 13 20:46:44.181252 kubelet[2604]: E0113 20:46:44.181223 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkgfd" podUID="c07ee0db-ecb3-404d-8299-14d38d4f24e5"
Jan 13 20:46:44.185262 containerd[1502]: time="2025-01-13T20:46:44.185228169Z" level=error msg="Failed to destroy network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.185620 containerd[1502]: time="2025-01-13T20:46:44.185565510Z" level=error msg="encountered an error cleaning up failed sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.185620 containerd[1502]: time="2025-01-13T20:46:44.185621904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.185911 kubelet[2604]: E0113 20:46:44.185812 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:44.185911 kubelet[2604]: E0113 20:46:44.185859 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh"
Jan 13 20:46:44.185911 kubelet[2604]: E0113 20:46:44.185890 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh"
Jan 13 20:46:44.186054 kubelet[2604]: E0113 20:46:44.185924 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140"
Jan 13 20:46:44.416224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0-shm.mount: Deactivated successfully.
Jan 13 20:46:44.965483 systemd[1]: Created slice kubepods-besteffort-podaab8c9ce_5c63_4682_8391_52de7028ab06.slice - libcontainer container kubepods-besteffort-podaab8c9ce_5c63_4682_8391_52de7028ab06.slice.
Jan 13 20:46:44.967687 containerd[1502]: time="2025-01-13T20:46:44.967632646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:0,}"
Jan 13 20:46:45.027013 containerd[1502]: time="2025-01-13T20:46:45.026956041Z" level=error msg="Failed to destroy network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:45.027409 containerd[1502]: time="2025-01-13T20:46:45.027376727Z" level=error msg="encountered an error cleaning up failed sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:45.027461 containerd[1502]: time="2025-01-13T20:46:45.027441988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:45.027700 kubelet[2604]: E0113 20:46:45.027651 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:45.027762 kubelet[2604]: E0113 20:46:45.027722 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn"
Jan 13 20:46:45.027762 kubelet[2604]: E0113 20:46:45.027744 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn"
Jan 13 20:46:45.027810 kubelet[2604]: E0113 20:46:45.027788 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06"
Jan 13 20:46:45.030019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd-shm.mount: Deactivated successfully.
Jan 13 20:46:45.047602 kubelet[2604]: I0113 20:46:45.047563 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105"
Jan 13 20:46:45.048232 containerd[1502]: time="2025-01-13T20:46:45.048191645Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\""
Jan 13 20:46:45.048406 containerd[1502]: time="2025-01-13T20:46:45.048387489Z" level=info msg="Ensure that sandbox e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105 in task-service has been cleanup successfully"
Jan 13 20:46:45.048782 containerd[1502]: time="2025-01-13T20:46:45.048742673Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully"
Jan 13 20:46:45.048782 containerd[1502]: time="2025-01-13T20:46:45.048759758Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully"
Jan 13 20:46:45.049181 containerd[1502]: time="2025-01-13T20:46:45.049146786Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\""
Jan 13 20:46:45.049284 containerd[1502]: time="2025-01-13T20:46:45.049226356Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully"
Jan 13 20:46:45.049284 containerd[1502]: time="2025-01-13T20:46:45.049236537Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully"
Jan 13 20:46:45.049590 kubelet[2604]: I0113 20:46:45.049558 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658"
Jan 13 20:46:45.050295 containerd[1502]: time="2025-01-13T20:46:45.050028990Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\""
Jan 13 20:46:45.050295 containerd[1502]: time="2025-01-13T20:46:45.050088851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:2,}"
Jan 13 20:46:45.050805 containerd[1502]: time="2025-01-13T20:46:45.050774710Z" level=info msg="Ensure that sandbox d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658 in task-service has been cleanup successfully"
Jan 13 20:46:45.050994 systemd[1]: run-netns-cni\x2d9e59dd85\x2d2bb5\x2d44df\x2dda2d\x2db6ffc6a91d05.mount: Deactivated successfully.
Jan 13 20:46:45.051100 containerd[1502]: time="2025-01-13T20:46:45.051053861Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully"
Jan 13 20:46:45.051100 containerd[1502]: time="2025-01-13T20:46:45.051066286Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully"
Jan 13 20:46:45.051812 containerd[1502]: time="2025-01-13T20:46:45.051579989Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\""
Jan 13 20:46:45.051812 containerd[1502]: time="2025-01-13T20:46:45.051657675Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully"
Jan 13 20:46:45.051812 containerd[1502]: time="2025-01-13T20:46:45.051669319Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully"
Jan 13 20:46:45.052719 kubelet[2604]: I0113 20:46:45.052301 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527"
Jan 13 20:46:45.052759 containerd[1502]: time="2025-01-13T20:46:45.052307883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:2,}"
Jan 13 20:46:45.053268 containerd[1502]: time="2025-01-13T20:46:45.053246020Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\""
Jan 13 20:46:45.054076 containerd[1502]: time="2025-01-13T20:46:45.054027761Z" level=info msg="Ensure that sandbox 52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527 in task-service has been cleanup successfully"
Jan 13 20:46:45.054555 containerd[1502]: time="2025-01-13T20:46:45.054528609Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully"
Jan 13 20:46:45.054600 containerd[1502]: time="2025-01-13T20:46:45.054551766Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully"
Jan 13 20:46:45.056080 containerd[1502]: time="2025-01-13T20:46:45.056027683Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\""
Jan 13 20:46:45.057336 containerd[1502]: time="2025-01-13T20:46:45.056219339Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully"
Jan 13 20:46:45.057336 containerd[1502]: time="2025-01-13T20:46:45.056239770Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully"
Jan 13 20:46:45.058437 kubelet[2604]: E0113 20:46:45.058420 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:45.059075 containerd[1502]: time="2025-01-13T20:46:45.059033227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:2,}"
Jan 13 20:46:45.059350 kubelet[2604]: I0113 20:46:45.059304 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e"
Jan 13 20:46:45.060288 containerd[1502]: time="2025-01-13T20:46:45.059907215Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\""
Jan 13 20:46:45.060288 containerd[1502]: time="2025-01-13T20:46:45.060159603Z" level=info msg="Ensure that sandbox d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e in task-service has been cleanup successfully"
Jan 13 20:46:45.060496 containerd[1502]: time="2025-01-13T20:46:45.060472251Z" level=info msg="TearDown network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully"
Jan 13 20:46:45.060496 containerd[1502]: time="2025-01-13T20:46:45.060490288Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully"
Jan 13 20:46:45.061504 containerd[1502]: time="2025-01-13T20:46:45.061055995Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\""
Jan 13 20:46:45.061504 containerd[1502]: time="2025-01-13T20:46:45.061243702Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully"
Jan 13 20:46:45.061504 containerd[1502]: time="2025-01-13T20:46:45.061304225Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully"
Jan 13 20:46:45.062040 containerd[1502]: time="2025-01-13T20:46:45.061989312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:2,}"
Jan 13 20:46:45.062341 kubelet[2604]: I0113 20:46:45.062322 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7"
Jan 13 20:46:45.062842 containerd[1502]: time="2025-01-13T20:46:45.062817167Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\""
Jan 13 20:46:45.063161 containerd[1502]: time="2025-01-13T20:46:45.063017079Z" level=info msg="Ensure that sandbox aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7 in task-service has been cleanup successfully"
Jan 13 20:46:45.063296 kubelet[2604]: I0113 20:46:45.063229 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd"
Jan 13 20:46:45.063670 containerd[1502]: time="2025-01-13T20:46:45.063638829Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\""
Jan 13 20:46:45.063845 containerd[1502]: time="2025-01-13T20:46:45.063820734Z" level=info msg="Ensure that sandbox e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd in task-service has been cleanup successfully"
Jan 13 20:46:45.063987 containerd[1502]: time="2025-01-13T20:46:45.063923411Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully"
Jan 13 20:46:45.063987 containerd[1502]: time="2025-01-13T20:46:45.063988752Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully"
Jan 13 20:46:45.064118 containerd[1502]: time="2025-01-13T20:46:45.064069415Z" level=info msg="TearDown network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully"
Jan 13 20:46:45.064499 containerd[1502]: time="2025-01-13T20:46:45.064112982Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully"
Jan 13 20:46:45.064499 containerd[1502]: time="2025-01-13T20:46:45.064321021Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\""
Jan 13 20:46:45.064499 containerd[1502]: time="2025-01-13T20:46:45.064464129Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully"
Jan 13 20:46:45.064499 containerd[1502]: time="2025-01-13T20:46:45.064490182Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully"
Jan 13 20:46:45.064747 kubelet[2604]: E0113 20:46:45.064640
2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:45.065170 containerd[1502]: time="2025-01-13T20:46:45.064783121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:1,}" Jan 13 20:46:45.065170 containerd[1502]: time="2025-01-13T20:46:45.064903813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:2,}" Jan 13 20:46:45.164928 containerd[1502]: time="2025-01-13T20:46:45.164868250Z" level=error msg="Failed to destroy network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.166623 containerd[1502]: time="2025-01-13T20:46:45.166575542Z" level=error msg="encountered an error cleaning up failed sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.167129 containerd[1502]: time="2025-01-13T20:46:45.167096981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.167515 kubelet[2604]: E0113 20:46:45.167473 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.167575 kubelet[2604]: E0113 20:46:45.167541 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:45.167575 kubelet[2604]: E0113 20:46:45.167565 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:45.167647 kubelet[2604]: E0113 20:46:45.167607 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140" Jan 13 20:46:45.180367 containerd[1502]: time="2025-01-13T20:46:45.180225593Z" level=error msg="Failed to destroy network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.180746 containerd[1502]: time="2025-01-13T20:46:45.180723995Z" level=error msg="encountered an error cleaning up failed sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.181439 containerd[1502]: time="2025-01-13T20:46:45.181398230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.182094 kubelet[2604]: E0113 20:46:45.182047 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.182201 kubelet[2604]: E0113 20:46:45.182115 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:45.182201 kubelet[2604]: E0113 20:46:45.182139 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:45.182201 kubelet[2604]: E0113 20:46:45.182190 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015" Jan 13 20:46:45.195897 containerd[1502]: time="2025-01-13T20:46:45.195786025Z" level=error msg="Failed to destroy network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.196609 containerd[1502]: time="2025-01-13T20:46:45.196550632Z" level=error msg="encountered an error cleaning up failed sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.196720 containerd[1502]: time="2025-01-13T20:46:45.196693329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.197124 kubelet[2604]: E0113 20:46:45.197060 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.197180 kubelet[2604]: E0113 
20:46:45.197123 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:45.197180 kubelet[2604]: E0113 20:46:45.197152 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:45.197245 kubelet[2604]: E0113 20:46:45.197191 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:45.198500 containerd[1502]: time="2025-01-13T20:46:45.198385622Z" level=error msg="Failed to destroy network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.199138 containerd[1502]: time="2025-01-13T20:46:45.199097063Z" level=error msg="encountered an error cleaning up failed sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.199244 containerd[1502]: time="2025-01-13T20:46:45.199225310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.199726 kubelet[2604]: E0113 20:46:45.199537 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.199726 kubelet[2604]: E0113 20:46:45.199598 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:45.199726 kubelet[2604]: E0113 20:46:45.199615 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:45.199848 kubelet[2604]: E0113 20:46:45.199664 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podUID="6253a823-a0be-41d8-b9d8-038c03511377" Jan 13 20:46:45.204037 containerd[1502]: time="2025-01-13T20:46:45.203996345Z" level=error msg="Failed to destroy network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.204439 containerd[1502]: time="2025-01-13T20:46:45.204406991Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.204486 containerd[1502]: time="2025-01-13T20:46:45.204464677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.204666 kubelet[2604]: E0113 20:46:45.204645 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.204956 kubelet[2604]: E0113 20:46:45.204734 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:45.204956 kubelet[2604]: E0113 20:46:45.204754 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:45.204956 kubelet[2604]: E0113 20:46:45.204786 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb" Jan 13 20:46:45.211132 containerd[1502]: time="2025-01-13T20:46:45.211089589Z" level=error msg="Failed to destroy network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.211502 containerd[1502]: time="2025-01-13T20:46:45.211470776Z" level=error msg="encountered an error cleaning up failed sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.211563 containerd[1502]: time="2025-01-13T20:46:45.211537530Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.211783 kubelet[2604]: E0113 20:46:45.211729 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:45.211783 kubelet[2604]: E0113 20:46:45.211775 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:45.211783 kubelet[2604]: E0113 20:46:45.211789 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:45.211977 kubelet[2604]: E0113 20:46:45.211823 2604 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkgfd" podUID="c07ee0db-ecb3-404d-8299-14d38d4f24e5" Jan 13 20:46:45.429472 systemd[1]: run-netns-cni\x2def17de9b\x2da791\x2d6d24\x2d666d\x2dde041d45d13e.mount: Deactivated successfully. Jan 13 20:46:45.430191 systemd[1]: run-netns-cni\x2d4f337f2e\x2dabcd\x2d93c8\x2dc45c\x2da5d2f585810b.mount: Deactivated successfully. Jan 13 20:46:45.430276 systemd[1]: run-netns-cni\x2dcf6856cc\x2d6457\x2dbd56\x2d40a8\x2d9253524f685c.mount: Deactivated successfully. Jan 13 20:46:45.430349 systemd[1]: run-netns-cni\x2d76e3deec\x2d4782\x2d7126\x2d3a55\x2def7b773c11bb.mount: Deactivated successfully. Jan 13 20:46:45.430424 systemd[1]: run-netns-cni\x2d10db19d7\x2d397f\x2d82b0\x2d0ad1\x2dccf1f6da4e2e.mount: Deactivated successfully. 
Jan 13 20:46:46.067246 kubelet[2604]: I0113 20:46:46.067208 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385" Jan 13 20:46:46.067758 containerd[1502]: time="2025-01-13T20:46:46.067717161Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:46:46.068065 containerd[1502]: time="2025-01-13T20:46:46.067953706Z" level=info msg="Ensure that sandbox d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385 in task-service has been cleanup successfully" Jan 13 20:46:46.069356 containerd[1502]: time="2025-01-13T20:46:46.068636946Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:46:46.069356 containerd[1502]: time="2025-01-13T20:46:46.068659121Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:46:46.069558 kubelet[2604]: I0113 20:46:46.069087 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b" Jan 13 20:46:46.069597 containerd[1502]: time="2025-01-13T20:46:46.069502604Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:46:46.069747 containerd[1502]: time="2025-01-13T20:46:46.069626132Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 20:46:46.069747 containerd[1502]: time="2025-01-13T20:46:46.069646673Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:46:46.069747 containerd[1502]: time="2025-01-13T20:46:46.069690231Z" level=info msg="StopPodSandbox for 
\"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:46:46.069910 containerd[1502]: time="2025-01-13T20:46:46.069889441Z" level=info msg="Ensure that sandbox 445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b in task-service has been cleanup successfully" Jan 13 20:46:46.070249 containerd[1502]: time="2025-01-13T20:46:46.070219954Z" level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:46:46.070416 containerd[1502]: time="2025-01-13T20:46:46.070298682Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:46:46.070416 containerd[1502]: time="2025-01-13T20:46:46.070281918Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:46:46.070637 containerd[1502]: time="2025-01-13T20:46:46.070589977Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:46:46.070637 containerd[1502]: time="2025-01-13T20:46:46.070605839Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:46:46.070764 containerd[1502]: time="2025-01-13T20:46:46.070732092Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:46:46.070842 containerd[1502]: time="2025-01-13T20:46:46.070823045Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:46:46.070842 containerd[1502]: time="2025-01-13T20:46:46.070838215Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:46:46.071289 containerd[1502]: time="2025-01-13T20:46:46.071210673Z" level=info 
msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:46:46.071335 containerd[1502]: time="2025-01-13T20:46:46.071292818Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:46:46.071335 containerd[1502]: time="2025-01-13T20:46:46.071305753Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:46:46.071737 kubelet[2604]: I0113 20:46:46.071474 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228" Jan 13 20:46:46.071786 containerd[1502]: time="2025-01-13T20:46:46.071581126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:3,}" Jan 13 20:46:46.071940 containerd[1502]: time="2025-01-13T20:46:46.071797310Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:46:46.072212 containerd[1502]: time="2025-01-13T20:46:46.072156992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:46:46.072942 containerd[1502]: time="2025-01-13T20:46:46.072711034Z" level=info msg="Ensure that sandbox c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228 in task-service has been cleanup successfully" Jan 13 20:46:46.072942 containerd[1502]: time="2025-01-13T20:46:46.072905434Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:46:46.072942 containerd[1502]: time="2025-01-13T20:46:46.072916476Z" level=info msg="StopPodSandbox for 
\"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:46:46.073321 containerd[1502]: time="2025-01-13T20:46:46.073234425Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:46:46.073321 containerd[1502]: time="2025-01-13T20:46:46.073307140Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:46:46.073321 containerd[1502]: time="2025-01-13T20:46:46.073316900Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:46:46.073738 containerd[1502]: time="2025-01-13T20:46:46.073567383Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:46:46.074073 containerd[1502]: time="2025-01-13T20:46:46.073956373Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:46:46.074073 containerd[1502]: time="2025-01-13T20:46:46.073985091Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:46:46.074254 kubelet[2604]: I0113 20:46:46.074187 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7" Jan 13 20:46:46.074270 systemd[1]: run-netns-cni\x2dadfd8a67\x2d0a63\x2dd6b8\x2de992\x2da6bfeefb8606.mount: Deactivated successfully. 
Jan 13 20:46:46.075028 kubelet[2604]: E0113 20:46:46.074613 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:46.077287 containerd[1502]: time="2025-01-13T20:46:46.077228952Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 20:46:46.077508 containerd[1502]: time="2025-01-13T20:46:46.077485717Z" level=info msg="Ensure that sandbox 83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7 in task-service has been cleanup successfully" Jan 13 20:46:46.077924 containerd[1502]: time="2025-01-13T20:46:46.077901090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:3,}" Jan 13 20:46:46.078270 containerd[1502]: time="2025-01-13T20:46:46.077908866Z" level=info msg="TearDown network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" successfully" Jan 13 20:46:46.078270 containerd[1502]: time="2025-01-13T20:46:46.078192937Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" returns successfully" Jan 13 20:46:46.082801 containerd[1502]: time="2025-01-13T20:46:46.080515668Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:46:46.080666 systemd[1]: run-netns-cni\x2d889390fb\x2dd031\x2d2d21\x2db98a\x2d8532a7ffd645.mount: Deactivated successfully. Jan 13 20:46:46.080839 systemd[1]: run-netns-cni\x2d3a48328d\x2db595\x2d22fd\x2dc232\x2d695556d64819.mount: Deactivated successfully. 
Jan 13 20:46:46.083337 containerd[1502]: time="2025-01-13T20:46:46.083090366Z" level=info msg="TearDown network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully" Jan 13 20:46:46.083826 containerd[1502]: time="2025-01-13T20:46:46.083104274Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully" Jan 13 20:46:46.084407 containerd[1502]: time="2025-01-13T20:46:46.084274503Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:46:46.084407 containerd[1502]: time="2025-01-13T20:46:46.084353893Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully" Jan 13 20:46:46.084407 containerd[1502]: time="2025-01-13T20:46:46.084363903Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully" Jan 13 20:46:46.085532 containerd[1502]: time="2025-01-13T20:46:46.085494120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:46:46.085595 kubelet[2604]: I0113 20:46:46.085541 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846" Jan 13 20:46:46.086727 systemd[1]: run-netns-cni\x2de84c0e35\x2d5050\x2d91bf\x2db883\x2d4e072c612f1b.mount: Deactivated successfully. 
Jan 13 20:46:46.088203 containerd[1502]: time="2025-01-13T20:46:46.087626181Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:46:46.088203 containerd[1502]: time="2025-01-13T20:46:46.088028127Z" level=info msg="Ensure that sandbox 6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846 in task-service has been cleanup successfully" Jan 13 20:46:46.088661 containerd[1502]: time="2025-01-13T20:46:46.088484443Z" level=info msg="TearDown network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" successfully" Jan 13 20:46:46.088661 containerd[1502]: time="2025-01-13T20:46:46.088506076Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" returns successfully" Jan 13 20:46:46.088752 kubelet[2604]: I0113 20:46:46.088621 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1" Jan 13 20:46:46.090348 containerd[1502]: time="2025-01-13T20:46:46.090290157Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\"" Jan 13 20:46:46.090838 containerd[1502]: time="2025-01-13T20:46:46.090304486Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:46:46.090838 containerd[1502]: time="2025-01-13T20:46:46.090491401Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully" Jan 13 20:46:46.090838 containerd[1502]: time="2025-01-13T20:46:46.090587114Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully" Jan 13 20:46:46.090838 containerd[1502]: time="2025-01-13T20:46:46.090668747Z" level=info msg="Ensure that sandbox d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1 
in task-service has been cleanup successfully" Jan 13 20:46:46.090964 systemd[1]: run-netns-cni\x2ddea5bbd4\x2dc37b\x2de2e4\x2d49f4\x2d30ae18a22856.mount: Deactivated successfully. Jan 13 20:46:46.091179 containerd[1502]: time="2025-01-13T20:46:46.091156225Z" level=info msg="TearDown network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" successfully" Jan 13 20:46:46.091313 containerd[1502]: time="2025-01-13T20:46:46.091250775Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" returns successfully" Jan 13 20:46:46.091502 containerd[1502]: time="2025-01-13T20:46:46.091477219Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\"" Jan 13 20:46:46.091801 containerd[1502]: time="2025-01-13T20:46:46.091720277Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully" Jan 13 20:46:46.091801 containerd[1502]: time="2025-01-13T20:46:46.091739175Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully" Jan 13 20:46:46.092242 kubelet[2604]: E0113 20:46:46.092207 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:46.092491 containerd[1502]: time="2025-01-13T20:46:46.092462998Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:46:46.092702 containerd[1502]: time="2025-01-13T20:46:46.092669913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:3,}" Jan 13 20:46:46.092786 containerd[1502]: time="2025-01-13T20:46:46.092702439Z" level=info msg="TearDown network for sandbox 
\"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully" Jan 13 20:46:46.092786 containerd[1502]: time="2025-01-13T20:46:46.092718491Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully" Jan 13 20:46:46.093211 containerd[1502]: time="2025-01-13T20:46:46.093185458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:2,}" Jan 13 20:46:46.192662 containerd[1502]: time="2025-01-13T20:46:46.192612739Z" level=error msg="Failed to destroy network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.193090 containerd[1502]: time="2025-01-13T20:46:46.193065478Z" level=error msg="encountered an error cleaning up failed sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.193144 containerd[1502]: time="2025-01-13T20:46:46.193120509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.193388 kubelet[2604]: E0113 20:46:46.193349 2604 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.193473 kubelet[2604]: E0113 20:46:46.193412 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:46.193473 kubelet[2604]: E0113 20:46:46.193434 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:46.193527 kubelet[2604]: E0113 20:46:46.193477 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140" Jan 13 20:46:46.310946 containerd[1502]: time="2025-01-13T20:46:46.310837159Z" level=error msg="Failed to destroy network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.312413 containerd[1502]: time="2025-01-13T20:46:46.312358994Z" level=error msg="encountered an error cleaning up failed sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.312563 containerd[1502]: time="2025-01-13T20:46:46.312449936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.312747 kubelet[2604]: E0113 20:46:46.312694 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.312815 kubelet[2604]: E0113 20:46:46.312793 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:46.312862 kubelet[2604]: E0113 20:46:46.312816 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:46.312947 kubelet[2604]: E0113 20:46:46.312912 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb" Jan 13 20:46:46.417928 systemd[1]: run-netns-cni\x2d85207332\x2d2e98\x2dbe54\x2d6c9d\x2d3a4a5185d5d6.mount: Deactivated successfully. 
Jan 13 20:46:46.879283 containerd[1502]: time="2025-01-13T20:46:46.879223353Z" level=error msg="Failed to destroy network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.879781 containerd[1502]: time="2025-01-13T20:46:46.879734328Z" level=error msg="encountered an error cleaning up failed sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.879849 containerd[1502]: time="2025-01-13T20:46:46.879823747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.880177 kubelet[2604]: E0113 20:46:46.880133 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:46.880255 kubelet[2604]: E0113 20:46:46.880205 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:46.880255 kubelet[2604]: E0113 20:46:46.880232 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:46.880332 kubelet[2604]: E0113 20:46:46.880289 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015" Jan 13 20:46:46.881652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9-shm.mount: Deactivated successfully. 
Jan 13 20:46:47.092822 kubelet[2604]: I0113 20:46:47.092179 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9" Jan 13 20:46:47.093284 containerd[1502]: time="2025-01-13T20:46:47.092674991Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:46:47.093284 containerd[1502]: time="2025-01-13T20:46:47.092941965Z" level=info msg="Ensure that sandbox 2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9 in task-service has been cleanup successfully" Jan 13 20:46:47.094137 kubelet[2604]: I0113 20:46:47.094093 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621" Jan 13 20:46:47.094586 containerd[1502]: time="2025-01-13T20:46:47.094545017Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:46:47.094957 containerd[1502]: time="2025-01-13T20:46:47.094781531Z" level=info msg="Ensure that sandbox 6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621 in task-service has been cleanup successfully" Jan 13 20:46:47.095223 containerd[1502]: time="2025-01-13T20:46:47.095197384Z" level=info msg="TearDown network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" successfully" Jan 13 20:46:47.095394 containerd[1502]: time="2025-01-13T20:46:47.095320531Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" returns successfully" Jan 13 20:46:47.095442 containerd[1502]: time="2025-01-13T20:46:47.095278647Z" level=info msg="TearDown network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" successfully" Jan 13 20:46:47.095442 containerd[1502]: time="2025-01-13T20:46:47.095408828Z" level=info msg="StopPodSandbox for 
\"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" returns successfully" Jan 13 20:46:47.095776 containerd[1502]: time="2025-01-13T20:46:47.095728348Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:46:47.095822 containerd[1502]: time="2025-01-13T20:46:47.095744701Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:46:47.095922 containerd[1502]: time="2025-01-13T20:46:47.095838409Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:46:47.095922 containerd[1502]: time="2025-01-13T20:46:47.095901335Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:46:47.096041 containerd[1502]: time="2025-01-13T20:46:47.095946986Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:46:47.096041 containerd[1502]: time="2025-01-13T20:46:47.096039662Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:46:47.096378 containerd[1502]: time="2025-01-13T20:46:47.096212668Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:46:47.096378 containerd[1502]: time="2025-01-13T20:46:47.096228460Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:46:47.096378 containerd[1502]: time="2025-01-13T20:46:47.096286166Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 20:46:47.096378 containerd[1502]: time="2025-01-13T20:46:47.096295054Z" level=info msg="StopPodSandbox for 
\"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:46:47.096378 containerd[1502]: time="2025-01-13T20:46:47.096320685Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:46:47.096378 containerd[1502]: time="2025-01-13T20:46:47.096331276Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:46:47.096548 kubelet[2604]: I0113 20:46:47.096433 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188" Jan 13 20:46:47.096889 containerd[1502]: time="2025-01-13T20:46:47.096799054Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:46:47.096889 containerd[1502]: time="2025-01-13T20:46:47.096843302Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:46:47.096994 containerd[1502]: time="2025-01-13T20:46:47.096941188Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:46:47.096994 containerd[1502]: time="2025-01-13T20:46:47.096989655Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:46:47.097090 containerd[1502]: time="2025-01-13T20:46:47.097011299Z" level=info msg="Ensure that sandbox 7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188 in task-service has been cleanup successfully" Jan 13 20:46:47.097090 containerd[1502]: time="2025-01-13T20:46:47.096812660Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:46:47.097185 containerd[1502]: time="2025-01-13T20:46:47.097090377Z" level=info 
msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:46:47.097185 containerd[1502]: time="2025-01-13T20:46:47.097100316Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:46:47.097230 containerd[1502]: time="2025-01-13T20:46:47.097219806Z" level=info msg="TearDown network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" successfully" Jan 13 20:46:47.097260 containerd[1502]: time="2025-01-13T20:46:47.097232231Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" returns successfully" Jan 13 20:46:47.097437 kubelet[2604]: E0113 20:46:47.097412 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:47.097497 containerd[1502]: time="2025-01-13T20:46:47.097453424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:4,}" Jan 13 20:46:47.097626 systemd[1]: run-netns-cni\x2d4d71c4de\x2d7f89\x2d13c5\x2dea5d\x2d673d0dc59b32.mount: Deactivated successfully. 
Jan 13 20:46:47.097738 containerd[1502]: time="2025-01-13T20:46:47.097621922Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:46:47.097738 containerd[1502]: time="2025-01-13T20:46:47.097686261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:4,}" Jan 13 20:46:47.097738 containerd[1502]: time="2025-01-13T20:46:47.097695479Z" level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:46:47.097738 containerd[1502]: time="2025-01-13T20:46:47.097707022Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:46:47.097741 systemd[1]: run-netns-cni\x2db2301582\x2d475b\x2dc2b6\x2d4207\x2d220cb534ebd8.mount: Deactivated successfully. Jan 13 20:46:47.098138 containerd[1502]: time="2025-01-13T20:46:47.097981162Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:46:47.098138 containerd[1502]: time="2025-01-13T20:46:47.098073858Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:46:47.098138 containerd[1502]: time="2025-01-13T20:46:47.098082615Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:46:47.098422 containerd[1502]: time="2025-01-13T20:46:47.098372916Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:46:47.098482 containerd[1502]: time="2025-01-13T20:46:47.098453619Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:46:47.098482 containerd[1502]: 
time="2025-01-13T20:46:47.098464259Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:46:47.098960 containerd[1502]: time="2025-01-13T20:46:47.098929341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:46:47.172619 containerd[1502]: time="2025-01-13T20:46:47.172445469Z" level=error msg="Failed to destroy network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.174055 containerd[1502]: time="2025-01-13T20:46:47.173834533Z" level=error msg="encountered an error cleaning up failed sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.174055 containerd[1502]: time="2025-01-13T20:46:47.174000695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.174454 kubelet[2604]: E0113 20:46:47.174420 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.174506 kubelet[2604]: E0113 20:46:47.174477 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:47.174506 kubelet[2604]: E0113 20:46:47.174496 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:47.174554 kubelet[2604]: E0113 20:46:47.174526 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podUID="6253a823-a0be-41d8-b9d8-038c03511377" Jan 13 20:46:47.208766 containerd[1502]: time="2025-01-13T20:46:47.208686748Z" level=error msg="Failed to destroy network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.278510 containerd[1502]: time="2025-01-13T20:46:47.209127020Z" level=error msg="encountered an error cleaning up failed sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.278510 containerd[1502]: time="2025-01-13T20:46:47.209179556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.278770 kubelet[2604]: E0113 20:46:47.209386 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.278770 kubelet[2604]: E0113 
20:46:47.209442 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:47.278770 kubelet[2604]: E0113 20:46:47.209466 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:47.278966 kubelet[2604]: E0113 20:46:47.209508 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:47.344589 containerd[1502]: time="2025-01-13T20:46:47.344544697Z" level=error msg="Failed to destroy network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.345748 containerd[1502]: time="2025-01-13T20:46:47.345560362Z" level=error msg="encountered an error cleaning up failed sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.345748 containerd[1502]: time="2025-01-13T20:46:47.345645923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.346242 kubelet[2604]: E0113 20:46:47.346044 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.346242 kubelet[2604]: E0113 20:46:47.346105 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:47.346242 kubelet[2604]: E0113 20:46:47.346137 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:47.346372 kubelet[2604]: E0113 20:46:47.346197 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkgfd" podUID="c07ee0db-ecb3-404d-8299-14d38d4f24e5" Jan 13 20:46:47.400113 containerd[1502]: time="2025-01-13T20:46:47.400059229Z" level=error msg="Failed to destroy network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.400610 containerd[1502]: time="2025-01-13T20:46:47.400483820Z" level=error msg="encountered an error cleaning up failed sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.400610 containerd[1502]: time="2025-01-13T20:46:47.400556155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.401116 kubelet[2604]: E0113 20:46:47.401075 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.401345 kubelet[2604]: E0113 20:46:47.401274 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:47.401345 kubelet[2604]: E0113 20:46:47.401311 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:47.401800 kubelet[2604]: E0113 20:46:47.401473 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015" Jan 13 20:46:47.414467 containerd[1502]: time="2025-01-13T20:46:47.414403047Z" level=error msg="Failed to destroy network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.415150 containerd[1502]: time="2025-01-13T20:46:47.415001717Z" level=error msg="encountered an error cleaning up failed sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.415150 containerd[1502]: time="2025-01-13T20:46:47.415058460Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.415376 kubelet[2604]: E0113 20:46:47.415337 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.415429 kubelet[2604]: E0113 20:46:47.415398 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:47.415429 kubelet[2604]: E0113 20:46:47.415416 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:47.415487 kubelet[2604]: E0113 20:46:47.415463 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb" Jan 13 20:46:47.416265 containerd[1502]: time="2025-01-13T20:46:47.416224085Z" level=error msg="Failed to destroy network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.417057 containerd[1502]: time="2025-01-13T20:46:47.417031974Z" level=error msg="encountered an error cleaning up failed sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.417121 containerd[1502]: time="2025-01-13T20:46:47.417083377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.417865 kubelet[2604]: E0113 20:46:47.417834 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:47.417865 kubelet[2604]: E0113 20:46:47.417889 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:47.418018 kubelet[2604]: E0113 20:46:47.417906 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:47.418018 kubelet[2604]: E0113 20:46:47.417943 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140" Jan 13 20:46:47.419165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916-shm.mount: Deactivated successfully. Jan 13 20:46:47.419272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe-shm.mount: Deactivated successfully. Jan 13 20:46:47.419344 systemd[1]: run-netns-cni\x2df8c5515f\x2d1407\x2d8ed2\x2d732f\x2d782fc6603850.mount: Deactivated successfully. Jan 13 20:46:47.422897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586-shm.mount: Deactivated successfully. Jan 13 20:46:47.423005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397-shm.mount: Deactivated successfully. 
Jan 13 20:46:48.100781 kubelet[2604]: I0113 20:46:48.100745 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397" Jan 13 20:46:48.102091 containerd[1502]: time="2025-01-13T20:46:48.101466966Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" Jan 13 20:46:48.102091 containerd[1502]: time="2025-01-13T20:46:48.101666675Z" level=info msg="Ensure that sandbox 10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397 in task-service has been cleanup successfully" Jan 13 20:46:48.104693 containerd[1502]: time="2025-01-13T20:46:48.103793969Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" Jan 13 20:46:48.104693 containerd[1502]: time="2025-01-13T20:46:48.104005473Z" level=info msg="Ensure that sandbox bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586 in task-service has been cleanup successfully" Jan 13 20:46:48.104769 kubelet[2604]: I0113 20:46:48.102762 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586" Jan 13 20:46:48.104801 containerd[1502]: time="2025-01-13T20:46:48.104692677Z" level=info msg="TearDown network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" successfully" Jan 13 20:46:48.104801 containerd[1502]: time="2025-01-13T20:46:48.104710232Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" returns successfully" Jan 13 20:46:48.105121 containerd[1502]: time="2025-01-13T20:46:48.105049621Z" level=info msg="TearDown network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" successfully" Jan 13 20:46:48.105121 containerd[1502]: time="2025-01-13T20:46:48.105068549Z" level=info msg="StopPodSandbox for 
\"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" returns successfully" Jan 13 20:46:48.105121 containerd[1502]: time="2025-01-13T20:46:48.105079751Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:46:48.105197 containerd[1502]: time="2025-01-13T20:46:48.105167196Z" level=info msg="TearDown network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" successfully" Jan 13 20:46:48.105197 containerd[1502]: time="2025-01-13T20:46:48.105181795Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" returns successfully" Jan 13 20:46:48.106114 containerd[1502]: time="2025-01-13T20:46:48.105962597Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:46:48.106114 containerd[1502]: time="2025-01-13T20:46:48.106036495Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:46:48.106114 containerd[1502]: time="2025-01-13T20:46:48.106054491Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:46:48.106114 containerd[1502]: time="2025-01-13T20:46:48.106065593Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:46:48.106221 containerd[1502]: time="2025-01-13T20:46:48.106129932Z" level=info msg="TearDown network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" successfully" Jan 13 20:46:48.106221 containerd[1502]: time="2025-01-13T20:46:48.106143980Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" returns successfully" Jan 13 20:46:48.106276 containerd[1502]: time="2025-01-13T20:46:48.106220913Z" level=info msg="StopPodSandbox for 
\"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:46:48.106358 containerd[1502]: time="2025-01-13T20:46:48.106323559Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:46:48.106358 containerd[1502]: time="2025-01-13T20:46:48.106348769Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:46:48.106461 systemd[1]: run-netns-cni\x2dbd369525\x2d37c8\x2dfbaa\x2d5e72\x2d738ae64b2126.mount: Deactivated successfully. Jan 13 20:46:48.106785 containerd[1502]: time="2025-01-13T20:46:48.106516054Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:46:48.106785 containerd[1502]: time="2025-01-13T20:46:48.106593329Z" level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:46:48.106785 containerd[1502]: time="2025-01-13T20:46:48.106605934Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:46:48.108054 kubelet[2604]: I0113 20:46:48.107114 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe" Jan 13 20:46:48.108102 containerd[1502]: time="2025-01-13T20:46:48.107847016Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:46:48.108102 containerd[1502]: time="2025-01-13T20:46:48.107961105Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:46:48.108102 containerd[1502]: time="2025-01-13T20:46:48.107975363Z" level=info msg="StopPodSandbox for 
\"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:46:48.108102 containerd[1502]: time="2025-01-13T20:46:48.107981154Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:46:48.108102 containerd[1502]: time="2025-01-13T20:46:48.108087697Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:46:48.108102 containerd[1502]: time="2025-01-13T20:46:48.108103419Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:46:48.108806 containerd[1502]: time="2025-01-13T20:46:48.108512948Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:46:48.108806 containerd[1502]: time="2025-01-13T20:46:48.108612387Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:46:48.108806 containerd[1502]: time="2025-01-13T20:46:48.108648359Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:46:48.108999 kubelet[2604]: E0113 20:46:48.108981 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:48.109387 containerd[1502]: time="2025-01-13T20:46:48.109232407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:5,}" Jan 13 20:46:48.109387 containerd[1502]: time="2025-01-13T20:46:48.109334592Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" Jan 13 20:46:48.109387 containerd[1502]: 
time="2025-01-13T20:46:48.109378599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:46:48.109608 containerd[1502]: time="2025-01-13T20:46:48.109516545Z" level=info msg="Ensure that sandbox 073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe in task-service has been cleanup successfully" Jan 13 20:46:48.109996 containerd[1502]: time="2025-01-13T20:46:48.109961645Z" level=info msg="TearDown network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" successfully" Jan 13 20:46:48.110057 containerd[1502]: time="2025-01-13T20:46:48.109999622Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" returns successfully" Jan 13 20:46:48.110500 containerd[1502]: time="2025-01-13T20:46:48.110469582Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 20:46:48.110578 containerd[1502]: time="2025-01-13T20:46:48.110553529Z" level=info msg="TearDown network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" successfully" Jan 13 20:46:48.110578 containerd[1502]: time="2025-01-13T20:46:48.110571095Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" returns successfully" Jan 13 20:46:48.111156 kubelet[2604]: I0113 20:46:48.110321 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916" Jan 13 20:46:48.111418 containerd[1502]: time="2025-01-13T20:46:48.111380073Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:46:48.111977 containerd[1502]: time="2025-01-13T20:46:48.111489592Z" level=info msg="TearDown network for sandbox 
\"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully" Jan 13 20:46:48.111977 containerd[1502]: time="2025-01-13T20:46:48.111511555Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully" Jan 13 20:46:48.111977 containerd[1502]: time="2025-01-13T20:46:48.111577037Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" Jan 13 20:46:48.111977 containerd[1502]: time="2025-01-13T20:46:48.111730473Z" level=info msg="Ensure that sandbox c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916 in task-service has been cleanup successfully" Jan 13 20:46:48.113312 containerd[1502]: time="2025-01-13T20:46:48.113266946Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:46:48.113360 containerd[1502]: time="2025-01-13T20:46:48.113314110Z" level=info msg="TearDown network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" successfully" Jan 13 20:46:48.113360 containerd[1502]: time="2025-01-13T20:46:48.113342908Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" returns successfully" Jan 13 20:46:48.113335 systemd[1]: run-netns-cni\x2dfd0d8212\x2d2c38\x2ddac0\x2de23c\x2d41edbb9ca4cf.mount: Deactivated successfully. Jan 13 20:46:48.113475 containerd[1502]: time="2025-01-13T20:46:48.113380474Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully" Jan 13 20:46:48.113471 systemd[1]: run-netns-cni\x2d2b84b26e\x2d58ee\x2d9207\x2dd349\x2df29dda12b41d.mount: Deactivated successfully. 
Jan 13 20:46:48.114895 containerd[1502]: time="2025-01-13T20:46:48.113715414Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully" Jan 13 20:46:48.115070 containerd[1502]: time="2025-01-13T20:46:48.114990283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:46:48.115211 containerd[1502]: time="2025-01-13T20:46:48.114998379Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:46:48.115308 containerd[1502]: time="2025-01-13T20:46:48.115282969Z" level=info msg="TearDown network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" successfully" Jan 13 20:46:48.115308 containerd[1502]: time="2025-01-13T20:46:48.115299822Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" returns successfully" Jan 13 20:46:48.116585 containerd[1502]: time="2025-01-13T20:46:48.116559201Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:46:48.116681 containerd[1502]: time="2025-01-13T20:46:48.116658650Z" level=info msg="TearDown network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully" Jan 13 20:46:48.116709 containerd[1502]: time="2025-01-13T20:46:48.116678229Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully" Jan 13 20:46:48.117593 containerd[1502]: time="2025-01-13T20:46:48.117566265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:3,}" Jan 13 20:46:48.118597 systemd[1]: run-netns-cni\x2daa080483\x2dba8c\x2d9d4b\x2dddc3\x2da032992f8ed9.mount: 
Deactivated successfully. Jan 13 20:46:48.146679 kubelet[2604]: I0113 20:46:48.146634 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b" Jan 13 20:46:48.147818 containerd[1502]: time="2025-01-13T20:46:48.147633182Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" Jan 13 20:46:48.150800 kubelet[2604]: I0113 20:46:48.150767 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11" Jan 13 20:46:48.151161 containerd[1502]: time="2025-01-13T20:46:48.151126389Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" Jan 13 20:46:48.151476 containerd[1502]: time="2025-01-13T20:46:48.151367090Z" level=info msg="Ensure that sandbox a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11 in task-service has been cleanup successfully" Jan 13 20:46:48.151706 containerd[1502]: time="2025-01-13T20:46:48.151679995Z" level=info msg="TearDown network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" successfully" Jan 13 20:46:48.151706 containerd[1502]: time="2025-01-13T20:46:48.151700557Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" returns successfully" Jan 13 20:46:48.152062 containerd[1502]: time="2025-01-13T20:46:48.151921138Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:46:48.152108 containerd[1502]: time="2025-01-13T20:46:48.152044705Z" level=info msg="TearDown network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" successfully" Jan 13 20:46:48.152108 containerd[1502]: time="2025-01-13T20:46:48.152093282Z" level=info msg="StopPodSandbox for 
\"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" returns successfully" Jan 13 20:46:48.152731 containerd[1502]: time="2025-01-13T20:46:48.152665397Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\"" Jan 13 20:46:48.153360 containerd[1502]: time="2025-01-13T20:46:48.153273262Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully" Jan 13 20:46:48.153360 containerd[1502]: time="2025-01-13T20:46:48.153294445Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully" Jan 13 20:46:48.153711 containerd[1502]: time="2025-01-13T20:46:48.153540456Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\"" Jan 13 20:46:48.153711 containerd[1502]: time="2025-01-13T20:46:48.153628412Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully" Jan 13 20:46:48.153711 containerd[1502]: time="2025-01-13T20:46:48.153639876Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully" Jan 13 20:46:48.154402 kubelet[2604]: E0113 20:46:48.154381 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:48.155130 containerd[1502]: time="2025-01-13T20:46:48.155047591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:4,}" Jan 13 20:46:48.163612 containerd[1502]: time="2025-01-13T20:46:48.163580859Z" level=info msg="Ensure that sandbox 049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b in task-service has been cleanup successfully" Jan 13 
20:46:48.163835 containerd[1502]: time="2025-01-13T20:46:48.163778944Z" level=info msg="TearDown network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" successfully" Jan 13 20:46:48.163835 containerd[1502]: time="2025-01-13T20:46:48.163795287Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" returns successfully" Jan 13 20:46:48.164681 containerd[1502]: time="2025-01-13T20:46:48.164656569Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:46:48.164750 containerd[1502]: time="2025-01-13T20:46:48.164730598Z" level=info msg="TearDown network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" successfully" Jan 13 20:46:48.164750 containerd[1502]: time="2025-01-13T20:46:48.164740448Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" returns successfully" Jan 13 20:46:48.165446 containerd[1502]: time="2025-01-13T20:46:48.165275197Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:46:48.165519 containerd[1502]: time="2025-01-13T20:46:48.165504697Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:46:48.165571 containerd[1502]: time="2025-01-13T20:46:48.165559707Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:46:48.165996 containerd[1502]: time="2025-01-13T20:46:48.165977923Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:46:48.166167 containerd[1502]: time="2025-01-13T20:46:48.166140087Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 
20:46:48.166167 containerd[1502]: time="2025-01-13T20:46:48.166156479Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:46:48.166570 containerd[1502]: time="2025-01-13T20:46:48.166545848Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:46:48.166641 containerd[1502]: time="2025-01-13T20:46:48.166623374Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:46:48.166641 containerd[1502]: time="2025-01-13T20:46:48.166634415Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:46:48.167090 containerd[1502]: time="2025-01-13T20:46:48.167067973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:5,}" Jan 13 20:46:48.301165 containerd[1502]: time="2025-01-13T20:46:48.301009728Z" level=error msg="Failed to destroy network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.301934 containerd[1502]: time="2025-01-13T20:46:48.301520459Z" level=error msg="encountered an error cleaning up failed sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.301934 containerd[1502]: time="2025-01-13T20:46:48.301572232Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.302076 kubelet[2604]: E0113 20:46:48.301767 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.302076 kubelet[2604]: E0113 20:46:48.301821 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:48.302076 kubelet[2604]: E0113 20:46:48.301839 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:48.302535 kubelet[2604]: E0113 20:46:48.302209 2604 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140" Jan 13 20:46:48.302774 containerd[1502]: time="2025-01-13T20:46:48.302647523Z" level=error msg="Failed to destroy network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.303292 containerd[1502]: time="2025-01-13T20:46:48.303271100Z" level=error msg="encountered an error cleaning up failed sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.303377 containerd[1502]: time="2025-01-13T20:46:48.303361912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.303702 kubelet[2604]: E0113 20:46:48.303681 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.303814 kubelet[2604]: E0113 20:46:48.303798 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:48.304023 kubelet[2604]: E0113 20:46:48.303938 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:48.304023 kubelet[2604]: E0113 20:46:48.303983 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:48.317592 containerd[1502]: time="2025-01-13T20:46:48.316776673Z" level=error msg="Failed to destroy network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.317592 containerd[1502]: time="2025-01-13T20:46:48.317455230Z" level=error msg="encountered an error cleaning up failed sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.317592 containerd[1502]: time="2025-01-13T20:46:48.317531562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.318024 kubelet[2604]: E0113 20:46:48.317984 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.318149 kubelet[2604]: E0113 20:46:48.318129 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:48.318243 kubelet[2604]: E0113 20:46:48.318208 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:48.318340 kubelet[2604]: E0113 20:46:48.318302 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podUID="6253a823-a0be-41d8-b9d8-038c03511377" Jan 13 20:46:48.328564 containerd[1502]: time="2025-01-13T20:46:48.328521312Z" level=error msg="Failed to destroy network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.329174 containerd[1502]: time="2025-01-13T20:46:48.329153688Z" level=error msg="encountered an error cleaning up failed sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.329310 containerd[1502]: time="2025-01-13T20:46:48.329291283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.329543 kubelet[2604]: E0113 20:46:48.329516 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.329677 kubelet[2604]: E0113 20:46:48.329640 2604 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:48.329758 kubelet[2604]: E0113 20:46:48.329730 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:48.329858 kubelet[2604]: E0113 20:46:48.329836 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb" Jan 13 20:46:48.343896 containerd[1502]: time="2025-01-13T20:46:48.343844741Z" level=error msg="Failed to destroy network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.344486 containerd[1502]: time="2025-01-13T20:46:48.344463268Z" level=error msg="encountered an error cleaning up failed sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.344620 containerd[1502]: time="2025-01-13T20:46:48.344602316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.344750 containerd[1502]: time="2025-01-13T20:46:48.344666895Z" level=error msg="Failed to destroy network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.345343 containerd[1502]: time="2025-01-13T20:46:48.345317466Z" level=error msg="encountered an error cleaning up failed sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.345406 containerd[1502]: 
time="2025-01-13T20:46:48.345354491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.345551 kubelet[2604]: E0113 20:46:48.345519 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.345630 kubelet[2604]: E0113 20:46:48.345573 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:48.345630 kubelet[2604]: E0113 20:46:48.345595 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 
20:46:48.345630 kubelet[2604]: E0113 20:46:48.345528 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:48.345721 kubelet[2604]: E0113 20:46:48.345628 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015" Jan 13 20:46:48.345721 kubelet[2604]: E0113 20:46:48.345682 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:48.345721 kubelet[2604]: E0113 20:46:48.345713 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:48.345851 kubelet[2604]: E0113 20:46:48.345754 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkgfd" podUID="c07ee0db-ecb3-404d-8299-14d38d4f24e5" Jan 13 20:46:48.419038 systemd[1]: run-netns-cni\x2df9d2368b\x2dedc8\x2de69d\x2d726f\x2ded1f3e23ad83.mount: Deactivated successfully. Jan 13 20:46:48.419385 systemd[1]: run-netns-cni\x2dd174902f\x2d229f\x2d53f2\x2db446\x2d61687649a2d3.mount: Deactivated successfully. 
Jan 13 20:46:49.155157 kubelet[2604]: I0113 20:46:49.155112 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3" Jan 13 20:46:49.155922 containerd[1502]: time="2025-01-13T20:46:49.155856888Z" level=info msg="StopPodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\"" Jan 13 20:46:49.156717 containerd[1502]: time="2025-01-13T20:46:49.156103560Z" level=info msg="Ensure that sandbox 71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3 in task-service has been cleanup successfully" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.157052485Z" level=info msg="TearDown network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" successfully" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.157078677Z" level=info msg="StopPodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" returns successfully" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.157731091Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.157819247Z" level=info msg="TearDown network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" successfully" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.157829156Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" returns successfully" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.158246520Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.158318554Z" level=info msg="TearDown network for sandbox 
\"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" successfully" Jan 13 20:46:49.158444 containerd[1502]: time="2025-01-13T20:46:49.158331770Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" returns successfully" Jan 13 20:46:49.158897 containerd[1502]: time="2025-01-13T20:46:49.158580737Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:46:49.158897 containerd[1502]: time="2025-01-13T20:46:49.158644474Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:46:49.158897 containerd[1502]: time="2025-01-13T20:46:49.158652249Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:46:49.160554 systemd[1]: run-netns-cni\x2d6f4f3fba\x2d0694\x2d2ca0\x2d27a7\x2d525d1dd39f83.mount: Deactivated successfully. 
Jan 13 20:46:49.161238 containerd[1502]: time="2025-01-13T20:46:49.161059587Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:46:49.161238 containerd[1502]: time="2025-01-13T20:46:49.161176431Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 20:46:49.161238 containerd[1502]: time="2025-01-13T20:46:49.161187873Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:46:49.161720 containerd[1502]: time="2025-01-13T20:46:49.161670016Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:46:49.161850 containerd[1502]: time="2025-01-13T20:46:49.161801859Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:46:49.161850 containerd[1502]: time="2025-01-13T20:46:49.161824795Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:46:49.162575 containerd[1502]: time="2025-01-13T20:46:49.162417749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:6,}" Jan 13 20:46:49.162708 kubelet[2604]: I0113 20:46:49.162678 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99" Jan 13 20:46:49.163354 containerd[1502]: time="2025-01-13T20:46:49.163330843Z" level=info msg="StopPodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\"" Jan 13 20:46:49.163528 containerd[1502]: time="2025-01-13T20:46:49.163509348Z" level=info msg="Ensure that sandbox 
dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99 in task-service has been cleanup successfully" Jan 13 20:46:49.166364 systemd[1]: run-netns-cni\x2dac5290ca\x2d8319\x2d9abb\x2d7356\x2dc2be4dbd6cc8.mount: Deactivated successfully. Jan 13 20:46:49.166485 containerd[1502]: time="2025-01-13T20:46:49.166460282Z" level=info msg="TearDown network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" successfully" Jan 13 20:46:49.166485 containerd[1502]: time="2025-01-13T20:46:49.166477987Z" level=info msg="StopPodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" returns successfully" Jan 13 20:46:49.166772 containerd[1502]: time="2025-01-13T20:46:49.166733457Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" Jan 13 20:46:49.166896 containerd[1502]: time="2025-01-13T20:46:49.166856483Z" level=info msg="TearDown network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" successfully" Jan 13 20:46:49.166934 containerd[1502]: time="2025-01-13T20:46:49.166897414Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" returns successfully" Jan 13 20:46:49.167496 containerd[1502]: time="2025-01-13T20:46:49.167452924Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:46:49.167686 containerd[1502]: time="2025-01-13T20:46:49.167544266Z" level=info msg="TearDown network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" successfully" Jan 13 20:46:49.167686 containerd[1502]: time="2025-01-13T20:46:49.167607693Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" returns successfully" Jan 13 20:46:49.168348 containerd[1502]: time="2025-01-13T20:46:49.168316960Z" level=info msg="StopPodSandbox for 
\"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:46:49.168434 containerd[1502]: time="2025-01-13T20:46:49.168415847Z" level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:46:49.168473 containerd[1502]: time="2025-01-13T20:46:49.168432691Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:46:49.168881 containerd[1502]: time="2025-01-13T20:46:49.168839542Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:46:49.169281 containerd[1502]: time="2025-01-13T20:46:49.169191565Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:46:49.169376 containerd[1502]: time="2025-01-13T20:46:49.169343850Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:46:49.169757 kubelet[2604]: I0113 20:46:49.169569 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d" Jan 13 20:46:49.169825 containerd[1502]: time="2025-01-13T20:46:49.169672366Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:46:49.169825 containerd[1502]: time="2025-01-13T20:46:49.169748197Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:46:49.169825 containerd[1502]: time="2025-01-13T20:46:49.169759139Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:46:49.170072 containerd[1502]: time="2025-01-13T20:46:49.170048216Z" level=info msg="StopPodSandbox 
for \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\"" Jan 13 20:46:49.170260 containerd[1502]: time="2025-01-13T20:46:49.170227825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:6,}" Jan 13 20:46:49.170436 containerd[1502]: time="2025-01-13T20:46:49.170410099Z" level=info msg="Ensure that sandbox 186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d in task-service has been cleanup successfully" Jan 13 20:46:49.170729 containerd[1502]: time="2025-01-13T20:46:49.170712773Z" level=info msg="TearDown network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" successfully" Jan 13 20:46:49.170795 containerd[1502]: time="2025-01-13T20:46:49.170782132Z" level=info msg="StopPodSandbox for \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" returns successfully" Jan 13 20:46:49.172770 containerd[1502]: time="2025-01-13T20:46:49.172750953Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" Jan 13 20:46:49.173133 containerd[1502]: time="2025-01-13T20:46:49.172963638Z" level=info msg="TearDown network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" successfully" Jan 13 20:46:49.173133 containerd[1502]: time="2025-01-13T20:46:49.172976364Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" returns successfully" Jan 13 20:46:49.173360 containerd[1502]: time="2025-01-13T20:46:49.173342815Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:46:49.173500 containerd[1502]: time="2025-01-13T20:46:49.173483427Z" level=info msg="TearDown network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" successfully" Jan 13 20:46:49.173595 containerd[1502]: 
time="2025-01-13T20:46:49.173553917Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" returns successfully" Jan 13 20:46:49.173841 systemd[1]: run-netns-cni\x2d43206997\x2d1c50\x2d2d57\x2d2aec\x2dfe5693609c01.mount: Deactivated successfully. Jan 13 20:46:49.174637 containerd[1502]: time="2025-01-13T20:46:49.173856261Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:46:49.174637 containerd[1502]: time="2025-01-13T20:46:49.174042112Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:46:49.174637 containerd[1502]: time="2025-01-13T20:46:49.174051941Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:46:49.174637 containerd[1502]: time="2025-01-13T20:46:49.174493173Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:46:49.174637 containerd[1502]: time="2025-01-13T20:46:49.174572842Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:46:49.174637 containerd[1502]: time="2025-01-13T20:46:49.174584304Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:46:49.175392 containerd[1502]: time="2025-01-13T20:46:49.175134423Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:46:49.175392 containerd[1502]: time="2025-01-13T20:46:49.175218972Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:46:49.175392 containerd[1502]: time="2025-01-13T20:46:49.175228762Z" level=info msg="StopPodSandbox for 
\"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:46:49.175462 kubelet[2604]: I0113 20:46:49.174998 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d" Jan 13 20:46:49.175922 kubelet[2604]: E0113 20:46:49.175559 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:49.175982 containerd[1502]: time="2025-01-13T20:46:49.175602577Z" level=info msg="StopPodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\"" Jan 13 20:46:49.175982 containerd[1502]: time="2025-01-13T20:46:49.175766695Z" level=info msg="Ensure that sandbox 81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d in task-service has been cleanup successfully" Jan 13 20:46:49.175982 containerd[1502]: time="2025-01-13T20:46:49.175784912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:6,}" Jan 13 20:46:49.176180 containerd[1502]: time="2025-01-13T20:46:49.176092476Z" level=info msg="TearDown network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" successfully" Jan 13 20:46:49.176180 containerd[1502]: time="2025-01-13T20:46:49.176113568Z" level=info msg="StopPodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" returns successfully" Jan 13 20:46:49.176647 containerd[1502]: time="2025-01-13T20:46:49.176617083Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" Jan 13 20:46:49.176756 containerd[1502]: time="2025-01-13T20:46:49.176693637Z" level=info msg="TearDown network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" successfully" Jan 
13 20:46:49.176756 containerd[1502]: time="2025-01-13T20:46:49.176705200Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" returns successfully" Jan 13 20:46:49.179265 containerd[1502]: time="2025-01-13T20:46:49.179083188Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 20:46:49.179265 containerd[1502]: time="2025-01-13T20:46:49.179170603Z" level=info msg="TearDown network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" successfully" Jan 13 20:46:49.179265 containerd[1502]: time="2025-01-13T20:46:49.179208068Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" returns successfully" Jan 13 20:46:49.179120 systemd[1]: run-netns-cni\x2d10215d11\x2da809\x2d4c86\x2dffae\x2d0e3e31b34190.mount: Deactivated successfully. Jan 13 20:46:49.179498 containerd[1502]: time="2025-01-13T20:46:49.179469781Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:46:49.179581 containerd[1502]: time="2025-01-13T20:46:49.179559259Z" level=info msg="TearDown network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully" Jan 13 20:46:49.179581 containerd[1502]: time="2025-01-13T20:46:49.179578077Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully" Jan 13 20:46:49.180004 containerd[1502]: time="2025-01-13T20:46:49.179972895Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:46:49.180054 containerd[1502]: time="2025-01-13T20:46:49.180045259Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully" Jan 13 20:46:49.180104 containerd[1502]: time="2025-01-13T20:46:49.180054828Z" 
level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully" Jan 13 20:46:49.180606 containerd[1502]: time="2025-01-13T20:46:49.180577653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:46:49.181612 kubelet[2604]: I0113 20:46:49.181233 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de" Jan 13 20:46:49.181725 containerd[1502]: time="2025-01-13T20:46:49.181604863Z" level=info msg="StopPodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\"" Jan 13 20:46:49.184209 kubelet[2604]: I0113 20:46:49.184159 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d" Jan 13 20:46:49.184635 containerd[1502]: time="2025-01-13T20:46:49.184604945Z" level=info msg="StopPodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\"" Jan 13 20:46:49.184824 containerd[1502]: time="2025-01-13T20:46:49.184802670Z" level=info msg="Ensure that sandbox cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d in task-service has been cleanup successfully" Jan 13 20:46:49.185076 containerd[1502]: time="2025-01-13T20:46:49.185022068Z" level=info msg="TearDown network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" successfully" Jan 13 20:46:49.185171 containerd[1502]: time="2025-01-13T20:46:49.185141185Z" level=info msg="StopPodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" returns successfully" Jan 13 20:46:49.185456 containerd[1502]: time="2025-01-13T20:46:49.185432157Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" 
Jan 13 20:46:49.185566 containerd[1502]: time="2025-01-13T20:46:49.185535623Z" level=info msg="TearDown network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" successfully" Jan 13 20:46:49.185566 containerd[1502]: time="2025-01-13T20:46:49.185545954Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" returns successfully" Jan 13 20:46:49.186063 containerd[1502]: time="2025-01-13T20:46:49.185852245Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:46:49.186063 containerd[1502]: time="2025-01-13T20:46:49.185967996Z" level=info msg="TearDown network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" successfully" Jan 13 20:46:49.186063 containerd[1502]: time="2025-01-13T20:46:49.185979500Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" returns successfully" Jan 13 20:46:49.186515 containerd[1502]: time="2025-01-13T20:46:49.186488586Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:46:49.186584 containerd[1502]: time="2025-01-13T20:46:49.186566602Z" level=info msg="TearDown network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully" Jan 13 20:46:49.186584 containerd[1502]: time="2025-01-13T20:46:49.186580219Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully" Jan 13 20:46:49.187269 containerd[1502]: time="2025-01-13T20:46:49.187214495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:4,}" Jan 13 20:46:49.416373 systemd[1]: run-netns-cni\x2da2733c35\x2d09f2\x2d6ef9\x2dc100\x2d35f9a55b10c5.mount: Deactivated successfully. 
Jan 13 20:46:50.088733 containerd[1502]: time="2025-01-13T20:46:50.087912755Z" level=info msg="Ensure that sandbox 81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de in task-service has been cleanup successfully" Jan 13 20:46:50.093158 containerd[1502]: time="2025-01-13T20:46:50.093119136Z" level=info msg="TearDown network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" successfully" Jan 13 20:46:50.093209 systemd[1]: run-netns-cni\x2d6b77bf2f\x2d9b00\x2d3f7f\x2d6bf5\x2dedcbee5371e8.mount: Deactivated successfully. Jan 13 20:46:50.093842 containerd[1502]: time="2025-01-13T20:46:50.093390467Z" level=info msg="StopPodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" returns successfully" Jan 13 20:46:50.094343 containerd[1502]: time="2025-01-13T20:46:50.094318888Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" Jan 13 20:46:50.095238 containerd[1502]: time="2025-01-13T20:46:50.095205806Z" level=info msg="TearDown network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" successfully" Jan 13 20:46:50.095352 containerd[1502]: time="2025-01-13T20:46:50.095333381Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" returns successfully" Jan 13 20:46:50.097563 containerd[1502]: time="2025-01-13T20:46:50.097162007Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:46:50.097563 containerd[1502]: time="2025-01-13T20:46:50.097327447Z" level=info msg="TearDown network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" successfully" Jan 13 20:46:50.097563 containerd[1502]: time="2025-01-13T20:46:50.097349952Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" returns successfully" Jan 13 20:46:50.099055 
containerd[1502]: time="2025-01-13T20:46:50.099022827Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\"" Jan 13 20:46:50.100240 containerd[1502]: time="2025-01-13T20:46:50.100145676Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully" Jan 13 20:46:50.101528 containerd[1502]: time="2025-01-13T20:46:50.100478110Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully" Jan 13 20:46:50.102860 containerd[1502]: time="2025-01-13T20:46:50.102726582Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\"" Jan 13 20:46:50.103851 containerd[1502]: time="2025-01-13T20:46:50.103813810Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully" Jan 13 20:46:50.103851 containerd[1502]: time="2025-01-13T20:46:50.103846546Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully" Jan 13 20:46:50.109514 kubelet[2604]: E0113 20:46:50.108865 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:50.109739 containerd[1502]: time="2025-01-13T20:46:50.109682051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:5,}" Jan 13 20:46:50.229958 containerd[1502]: time="2025-01-13T20:46:50.229905856Z" level=error msg="Failed to destroy network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 13 20:46:50.230394 containerd[1502]: time="2025-01-13T20:46:50.230312988Z" level=error msg="encountered an error cleaning up failed sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.230394 containerd[1502]: time="2025-01-13T20:46:50.230376163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.230623 kubelet[2604]: E0113 20:46:50.230565 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.231205 kubelet[2604]: E0113 20:46:50.230932 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:50.231205 kubelet[2604]: 
E0113 20:46:50.230958 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zbbn" Jan 13 20:46:50.231205 kubelet[2604]: E0113 20:46:50.230999 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4zbbn_calico-system(aab8c9ce-5c63-4682-8391-52de7028ab06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zbbn" podUID="aab8c9ce-5c63-4682-8391-52de7028ab06" Jan 13 20:46:50.376606 containerd[1502]: time="2025-01-13T20:46:50.376335362Z" level=error msg="Failed to destroy network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.376900 containerd[1502]: time="2025-01-13T20:46:50.376789618Z" level=error msg="encountered an error cleaning up failed sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.376900 containerd[1502]: time="2025-01-13T20:46:50.376852995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.378663 kubelet[2604]: E0113 20:46:50.378227 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.378663 kubelet[2604]: E0113 20:46:50.378297 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:50.378663 kubelet[2604]: E0113 20:46:50.378314 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" Jan 13 20:46:50.378823 kubelet[2604]: E0113 20:46:50.378357 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-rzkmh_calico-apiserver(16b2a8f4-c5cc-4088-8589-adcfeced0140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podUID="16b2a8f4-c5cc-4088-8589-adcfeced0140" Jan 13 20:46:50.380024 containerd[1502]: time="2025-01-13T20:46:50.379956193Z" level=error msg="Failed to destroy network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.380450 containerd[1502]: time="2025-01-13T20:46:50.380349576Z" level=error msg="encountered an error cleaning up failed sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.380450 containerd[1502]: time="2025-01-13T20:46:50.380408423Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.380669 kubelet[2604]: E0113 20:46:50.380611 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.380773 kubelet[2604]: E0113 20:46:50.380734 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:50.380773 kubelet[2604]: E0113 20:46:50.380753 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" Jan 13 20:46:50.381137 kubelet[2604]: E0113 20:46:50.381013 2604 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68d84f998-vczjr_calico-system(a8175534-edcd-43aa-9f18-0f7c717c2015)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podUID="a8175534-edcd-43aa-9f18-0f7c717c2015" Jan 13 20:46:50.423645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07-shm.mount: Deactivated successfully. Jan 13 20:46:50.424056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f-shm.mount: Deactivated successfully. Jan 13 20:46:50.424154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331873849.mount: Deactivated successfully. 
Jan 13 20:46:50.439940 containerd[1502]: time="2025-01-13T20:46:50.439862459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:50.441565 containerd[1502]: time="2025-01-13T20:46:50.441531788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:46:50.443482 containerd[1502]: time="2025-01-13T20:46:50.443441617Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:50.447148 containerd[1502]: time="2025-01-13T20:46:50.447037185Z" level=error msg="Failed to destroy network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.447476 containerd[1502]: time="2025-01-13T20:46:50.447455389Z" level=error msg="encountered an error cleaning up failed sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.447587 containerd[1502]: time="2025-01-13T20:46:50.447567623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.448815 kubelet[2604]: E0113 20:46:50.447835 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.448815 kubelet[2604]: E0113 20:46:50.447914 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:50.448815 kubelet[2604]: E0113 20:46:50.447934 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" Jan 13 20:46:50.448935 kubelet[2604]: E0113 20:46:50.447970 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5848c99678-gl2ks_calico-apiserver(6253a823-a0be-41d8-b9d8-038c03511377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podUID="6253a823-a0be-41d8-b9d8-038c03511377" Jan 13 20:46:50.450376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50-shm.mount: Deactivated successfully. Jan 13 20:46:50.457584 containerd[1502]: time="2025-01-13T20:46:50.457534696Z" level=error msg="Failed to destroy network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.458608 containerd[1502]: time="2025-01-13T20:46:50.458554219Z" level=error msg="encountered an error cleaning up failed sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.458670 containerd[1502]: time="2025-01-13T20:46:50.458643538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.458909 kubelet[2604]: E0113 20:46:50.458854 2604 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.459109 kubelet[2604]: E0113 20:46:50.458997 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:50.459109 kubelet[2604]: E0113 20:46:50.459022 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kkgfd" Jan 13 20:46:50.459109 kubelet[2604]: E0113 20:46:50.459070 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kkgfd_kube-system(c07ee0db-ecb3-404d-8299-14d38d4f24e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kkgfd" podUID="c07ee0db-ecb3-404d-8299-14d38d4f24e5" Jan 13 20:46:50.459465 containerd[1502]: time="2025-01-13T20:46:50.459421198Z" level=error msg="Failed to destroy network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.459817 containerd[1502]: time="2025-01-13T20:46:50.459780414Z" level=error msg="encountered an error cleaning up failed sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.459855 containerd[1502]: time="2025-01-13T20:46:50.459826145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.460023 kubelet[2604]: E0113 20:46:50.459983 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:50.460063 
kubelet[2604]: E0113 20:46:50.460027 2604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:50.460063 kubelet[2604]: E0113 20:46:50.460049 2604 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-k7xxm" Jan 13 20:46:50.460115 kubelet[2604]: E0113 20:46:50.460083 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-k7xxm_kube-system(c50d7bc1-3e5c-4530-b31c-b018c3931fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-k7xxm" podUID="c50d7bc1-3e5c-4530-b31c-b018c3931fbb" Jan 13 20:46:50.460774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f-shm.mount: Deactivated successfully. 
Jan 13 20:46:50.463364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee-shm.mount: Deactivated successfully. Jan 13 20:46:50.475105 containerd[1502]: time="2025-01-13T20:46:50.475065131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:50.475845 containerd[1502]: time="2025-01-13T20:46:50.475805637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.437328956s" Jan 13 20:46:50.475845 containerd[1502]: time="2025-01-13T20:46:50.475837931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:46:50.483164 containerd[1502]: time="2025-01-13T20:46:50.483126955Z" level=info msg="CreateContainer within sandbox \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:46:50.503313 containerd[1502]: time="2025-01-13T20:46:50.503241121Z" level=info msg="CreateContainer within sandbox \"537f91e028cb1e5467b34833604a3b99dc19694031a710ed6b75d16be7e3b1ce\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e8b0323e14d99583c3dc72e3d740d7fbb2aa52c8e33f4d5d346092d981fca549\"" Jan 13 20:46:50.504002 containerd[1502]: time="2025-01-13T20:46:50.503775226Z" level=info msg="StartContainer for \"e8b0323e14d99583c3dc72e3d740d7fbb2aa52c8e33f4d5d346092d981fca549\"" Jan 13 20:46:50.574004 systemd[1]: Started 
cri-containerd-e8b0323e14d99583c3dc72e3d740d7fbb2aa52c8e33f4d5d346092d981fca549.scope - libcontainer container e8b0323e14d99583c3dc72e3d740d7fbb2aa52c8e33f4d5d346092d981fca549. Jan 13 20:46:50.608646 containerd[1502]: time="2025-01-13T20:46:50.608590007Z" level=info msg="StartContainer for \"e8b0323e14d99583c3dc72e3d740d7fbb2aa52c8e33f4d5d346092d981fca549\" returns successfully" Jan 13 20:46:50.671229 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:46:50.671709 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 20:46:51.005395 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:49058.service - OpenSSH per-connection server daemon (10.0.0.1:49058). Jan 13 20:46:51.065918 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 49058 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:51.067781 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:51.071775 systemd-logind[1488]: New session 10 of user core. Jan 13 20:46:51.084046 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 13 20:46:51.191560 kubelet[2604]: I0113 20:46:51.190337 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f" Jan 13 20:46:51.191701 containerd[1502]: time="2025-01-13T20:46:51.191072609Z" level=info msg="StopPodSandbox for \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\"" Jan 13 20:46:51.191701 containerd[1502]: time="2025-01-13T20:46:51.191424160Z" level=info msg="Ensure that sandbox b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f in task-service has been cleanup successfully" Jan 13 20:46:51.191774 containerd[1502]: time="2025-01-13T20:46:51.191751251Z" level=info msg="TearDown network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\" successfully" Jan 13 20:46:51.191774 containerd[1502]: time="2025-01-13T20:46:51.191769858Z" level=info msg="StopPodSandbox for \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\" returns successfully" Jan 13 20:46:51.192204 containerd[1502]: time="2025-01-13T20:46:51.192175916Z" level=info msg="StopPodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\"" Jan 13 20:46:51.192442 containerd[1502]: time="2025-01-13T20:46:51.192289211Z" level=info msg="TearDown network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" successfully" Jan 13 20:46:51.192442 containerd[1502]: time="2025-01-13T20:46:51.192303801Z" level=info msg="StopPodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" returns successfully" Jan 13 20:46:51.192710 containerd[1502]: time="2025-01-13T20:46:51.192684268Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" Jan 13 20:46:51.192849 containerd[1502]: time="2025-01-13T20:46:51.192773757Z" level=info msg="TearDown network for sandbox 
\"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" successfully" Jan 13 20:46:51.192849 containerd[1502]: time="2025-01-13T20:46:51.192785770Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" returns successfully" Jan 13 20:46:51.193709 containerd[1502]: time="2025-01-13T20:46:51.193642166Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:46:51.193794 containerd[1502]: time="2025-01-13T20:46:51.193769319Z" level=info msg="TearDown network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" successfully" Jan 13 20:46:51.193794 containerd[1502]: time="2025-01-13T20:46:51.193784228Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" returns successfully" Jan 13 20:46:51.194273 containerd[1502]: time="2025-01-13T20:46:51.194240557Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:46:51.194635 containerd[1502]: time="2025-01-13T20:46:51.194345717Z" level=info msg="TearDown network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully" Jan 13 20:46:51.194635 containerd[1502]: time="2025-01-13T20:46:51.194363312Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully" Jan 13 20:46:51.194911 kubelet[2604]: E0113 20:46:51.194862 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:51.195154 containerd[1502]: time="2025-01-13T20:46:51.195115459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:5,}" Jan 13 20:46:51.199148 kubelet[2604]: I0113 
20:46:51.199124 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6" Jan 13 20:46:51.199746 containerd[1502]: time="2025-01-13T20:46:51.199685048Z" level=info msg="StopPodSandbox for \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\"" Jan 13 20:46:51.202924 containerd[1502]: time="2025-01-13T20:46:51.200639057Z" level=info msg="Ensure that sandbox 26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6 in task-service has been cleanup successfully" Jan 13 20:46:51.202924 containerd[1502]: time="2025-01-13T20:46:51.201949657Z" level=info msg="TearDown network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\" successfully" Jan 13 20:46:51.202924 containerd[1502]: time="2025-01-13T20:46:51.201968995Z" level=info msg="StopPodSandbox for \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\" returns successfully" Jan 13 20:46:51.203449 containerd[1502]: time="2025-01-13T20:46:51.203400996Z" level=info msg="StopPodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\"" Jan 13 20:46:51.203542 containerd[1502]: time="2025-01-13T20:46:51.203513450Z" level=info msg="TearDown network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" successfully" Jan 13 20:46:51.203542 containerd[1502]: time="2025-01-13T20:46:51.203539492Z" level=info msg="StopPodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" returns successfully" Jan 13 20:46:51.203938 containerd[1502]: time="2025-01-13T20:46:51.203755061Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" Jan 13 20:46:51.203938 containerd[1502]: time="2025-01-13T20:46:51.203833036Z" level=info msg="TearDown network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" successfully" Jan 13 
20:46:51.203938 containerd[1502]: time="2025-01-13T20:46:51.203842886Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" returns successfully" Jan 13 20:46:51.204814 containerd[1502]: time="2025-01-13T20:46:51.204530435Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:46:51.204814 containerd[1502]: time="2025-01-13T20:46:51.204603511Z" level=info msg="TearDown network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" successfully" Jan 13 20:46:51.204814 containerd[1502]: time="2025-01-13T20:46:51.204611988Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" returns successfully" Jan 13 20:46:51.205741 kubelet[2604]: I0113 20:46:51.205273 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07" Jan 13 20:46:51.205889 containerd[1502]: time="2025-01-13T20:46:51.205047004Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:46:51.205976 containerd[1502]: time="2025-01-13T20:46:51.205611578Z" level=info msg="StopPodSandbox for \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\"" Jan 13 20:46:51.207222 containerd[1502]: time="2025-01-13T20:46:51.206056865Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:46:51.207222 containerd[1502]: time="2025-01-13T20:46:51.207213799Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:46:51.207307 containerd[1502]: time="2025-01-13T20:46:51.206812479Z" level=info msg="Ensure that sandbox dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07 in task-service has been 
cleanup successfully" Jan 13 20:46:51.207891 containerd[1502]: time="2025-01-13T20:46:51.207531972Z" level=info msg="TearDown network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\" successfully" Jan 13 20:46:51.207891 containerd[1502]: time="2025-01-13T20:46:51.207554917Z" level=info msg="StopPodSandbox for \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\" returns successfully" Jan 13 20:46:51.208725 containerd[1502]: time="2025-01-13T20:46:51.208342666Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:46:51.208725 containerd[1502]: time="2025-01-13T20:46:51.208435782Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 20:46:51.208725 containerd[1502]: time="2025-01-13T20:46:51.208445571Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:46:51.208725 containerd[1502]: time="2025-01-13T20:46:51.208565670Z" level=info msg="StopPodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\"" Jan 13 20:46:51.208725 containerd[1502]: time="2025-01-13T20:46:51.208659307Z" level=info msg="TearDown network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" successfully" Jan 13 20:46:51.208725 containerd[1502]: time="2025-01-13T20:46:51.208671110Z" level=info msg="StopPodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" returns successfully" Jan 13 20:46:51.208926 containerd[1502]: time="2025-01-13T20:46:51.208894154Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" Jan 13 20:46:51.208998 containerd[1502]: time="2025-01-13T20:46:51.208978833Z" level=info msg="TearDown network for sandbox 
\"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" successfully" Jan 13 20:46:51.208998 containerd[1502]: time="2025-01-13T20:46:51.208994374Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" returns successfully" Jan 13 20:46:51.209093 containerd[1502]: time="2025-01-13T20:46:51.209075234Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:46:51.209162 containerd[1502]: time="2025-01-13T20:46:51.209145184Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:46:51.209162 containerd[1502]: time="2025-01-13T20:46:51.209157368Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:46:51.209742 containerd[1502]: time="2025-01-13T20:46:51.209713745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:7,}" Jan 13 20:46:51.210063 containerd[1502]: time="2025-01-13T20:46:51.210035236Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:46:51.210238 containerd[1502]: time="2025-01-13T20:46:51.210132540Z" level=info msg="TearDown network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" successfully" Jan 13 20:46:51.210238 containerd[1502]: time="2025-01-13T20:46:51.210153421Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" returns successfully" Jan 13 20:46:51.210593 containerd[1502]: time="2025-01-13T20:46:51.210478448Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:46:51.211010 containerd[1502]: time="2025-01-13T20:46:51.210862052Z" 
level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:46:51.211010 containerd[1502]: time="2025-01-13T20:46:51.210909015Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:46:51.211275 containerd[1502]: time="2025-01-13T20:46:51.211250155Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:46:51.211364 containerd[1502]: time="2025-01-13T20:46:51.211343361Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:46:51.211411 containerd[1502]: time="2025-01-13T20:46:51.211360204Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:46:51.211972 containerd[1502]: time="2025-01-13T20:46:51.211934909Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:46:51.212104 containerd[1502]: time="2025-01-13T20:46:51.212079025Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:46:51.212104 containerd[1502]: time="2025-01-13T20:46:51.212095668Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:46:51.212454 kubelet[2604]: I0113 20:46:51.212424 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee" Jan 13 20:46:51.213339 containerd[1502]: time="2025-01-13T20:46:51.213304255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:7,}" Jan 13 
20:46:51.214362 kubelet[2604]: I0113 20:46:51.214138 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-86jnl" podStartSLOduration=1.8851189179999999 podStartE2EDuration="21.214125701s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:31.147441225 +0000 UTC m=+11.259056312" lastFinishedPulling="2025-01-13 20:46:50.476448008 +0000 UTC m=+30.588063095" observedRunningTime="2025-01-13 20:46:51.213284516 +0000 UTC m=+31.324899603" watchObservedRunningTime="2025-01-13 20:46:51.214125701 +0000 UTC m=+31.325740788" Jan 13 20:46:51.215243 containerd[1502]: time="2025-01-13T20:46:51.214848250Z" level=info msg="StopPodSandbox for \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\"" Jan 13 20:46:51.215243 containerd[1502]: time="2025-01-13T20:46:51.215083057Z" level=info msg="Ensure that sandbox 63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee in task-service has been cleanup successfully" Jan 13 20:46:51.215395 containerd[1502]: time="2025-01-13T20:46:51.215374718Z" level=info msg="TearDown network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\" successfully" Jan 13 20:46:51.215468 containerd[1502]: time="2025-01-13T20:46:51.215452803Z" level=info msg="StopPodSandbox for \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\" returns successfully" Jan 13 20:46:51.216212 containerd[1502]: time="2025-01-13T20:46:51.216001946Z" level=info msg="StopPodSandbox for \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\"" Jan 13 20:46:51.216402 containerd[1502]: time="2025-01-13T20:46:51.216269549Z" level=info msg="TearDown network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" successfully" Jan 13 20:46:51.216402 containerd[1502]: time="2025-01-13T20:46:51.216281112Z" level=info msg="StopPodSandbox for 
\"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" returns successfully" Jan 13 20:46:51.217420 containerd[1502]: time="2025-01-13T20:46:51.217223709Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" Jan 13 20:46:51.217420 containerd[1502]: time="2025-01-13T20:46:51.217313477Z" level=info msg="TearDown network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" successfully" Jan 13 20:46:51.217420 containerd[1502]: time="2025-01-13T20:46:51.217339229Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" returns successfully" Jan 13 20:46:51.217942 containerd[1502]: time="2025-01-13T20:46:51.217899704Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:46:51.218081 containerd[1502]: time="2025-01-13T20:46:51.218055976Z" level=info msg="TearDown network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" successfully" Jan 13 20:46:51.218081 containerd[1502]: time="2025-01-13T20:46:51.218071928Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" returns successfully" Jan 13 20:46:51.218999 containerd[1502]: time="2025-01-13T20:46:51.218959315Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:46:51.219562 containerd[1502]: time="2025-01-13T20:46:51.219057770Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:46:51.219562 containerd[1502]: time="2025-01-13T20:46:51.219069434Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:46:51.220667 containerd[1502]: time="2025-01-13T20:46:51.220635683Z" level=info msg="StopPodSandbox for 
\"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:46:51.220727 kubelet[2604]: I0113 20:46:51.220644 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50" Jan 13 20:46:51.221315 containerd[1502]: time="2025-01-13T20:46:51.220969347Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:46:51.221315 containerd[1502]: time="2025-01-13T20:46:51.221275887Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:46:51.221554 containerd[1502]: time="2025-01-13T20:46:51.221495455Z" level=info msg="StopPodSandbox for \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\"" Jan 13 20:46:51.222024 containerd[1502]: time="2025-01-13T20:46:51.221993787Z" level=info msg="Ensure that sandbox 8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50 in task-service has been cleanup successfully" Jan 13 20:46:51.222555 containerd[1502]: time="2025-01-13T20:46:51.222223845Z" level=info msg="TearDown network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\" successfully" Jan 13 20:46:51.222555 containerd[1502]: time="2025-01-13T20:46:51.222249044Z" level=info msg="StopPodSandbox for \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\" returns successfully" Jan 13 20:46:51.222632 containerd[1502]: time="2025-01-13T20:46:51.222590735Z" level=info msg="StopPodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\"" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.222682839Z" level=info msg="TearDown network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" successfully" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.222698680Z" 
level=info msg="StopPodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" returns successfully" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.222757888Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.222897396Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.222914430Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.223262382Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.223353644Z" level=info msg="TearDown network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" successfully" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.223364656Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" returns successfully" Jan 13 20:46:51.225784 containerd[1502]: time="2025-01-13T20:46:51.225700146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:7,}" Jan 13 20:46:51.226138 kubelet[2604]: E0113 20:46:51.223221 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:51.226174 containerd[1502]: time="2025-01-13T20:46:51.225987899Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 
20:46:51.226174 containerd[1502]: time="2025-01-13T20:46:51.226076656Z" level=info msg="TearDown network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" successfully" Jan 13 20:46:51.226174 containerd[1502]: time="2025-01-13T20:46:51.226086726Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" returns successfully" Jan 13 20:46:51.227002 containerd[1502]: time="2025-01-13T20:46:51.226924163Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:46:51.227067 containerd[1502]: time="2025-01-13T20:46:51.227050124Z" level=info msg="TearDown network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully" Jan 13 20:46:51.227067 containerd[1502]: time="2025-01-13T20:46:51.227064503Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully" Jan 13 20:46:51.228699 sshd[4958]: Connection closed by 10.0.0.1 port 49058 Jan 13 20:46:51.229250 containerd[1502]: time="2025-01-13T20:46:51.228955278Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:46:51.229250 containerd[1502]: time="2025-01-13T20:46:51.229077491Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully" Jan 13 20:46:51.229250 containerd[1502]: time="2025-01-13T20:46:51.229092110Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully" Jan 13 20:46:51.229133 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:51.229818 containerd[1502]: time="2025-01-13T20:46:51.229795060Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:6,}" Jan 13 20:46:51.230286 kubelet[2604]: I0113 20:46:51.230172 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f" Jan 13 20:46:51.231011 containerd[1502]: time="2025-01-13T20:46:51.230619782Z" level=info msg="StopPodSandbox for \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\"" Jan 13 20:46:51.231011 containerd[1502]: time="2025-01-13T20:46:51.230781093Z" level=info msg="Ensure that sandbox afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f in task-service has been cleanup successfully" Jan 13 20:46:51.231484 containerd[1502]: time="2025-01-13T20:46:51.231396589Z" level=info msg="TearDown network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\" successfully" Jan 13 20:46:51.231484 containerd[1502]: time="2025-01-13T20:46:51.231410206Z" level=info msg="StopPodSandbox for \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\" returns successfully" Jan 13 20:46:51.231938 containerd[1502]: time="2025-01-13T20:46:51.231745153Z" level=info msg="StopPodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\"" Jan 13 20:46:51.231938 containerd[1502]: time="2025-01-13T20:46:51.231826455Z" level=info msg="TearDown network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" successfully" Jan 13 20:46:51.231938 containerd[1502]: time="2025-01-13T20:46:51.231857797Z" level=info msg="StopPodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" returns successfully" Jan 13 20:46:51.232439 containerd[1502]: time="2025-01-13T20:46:51.232409655Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" Jan 13 20:46:51.232526 containerd[1502]: 
time="2025-01-13T20:46:51.232486559Z" level=info msg="TearDown network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" successfully" Jan 13 20:46:51.232526 containerd[1502]: time="2025-01-13T20:46:51.232518492Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" returns successfully" Jan 13 20:46:51.232756 containerd[1502]: time="2025-01-13T20:46:51.232735494Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:46:51.232852 containerd[1502]: time="2025-01-13T20:46:51.232833981Z" level=info msg="TearDown network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" successfully" Jan 13 20:46:51.232852 containerd[1502]: time="2025-01-13T20:46:51.232847417Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" returns successfully" Jan 13 20:46:51.233104 containerd[1502]: time="2025-01-13T20:46:51.233066664Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\"" Jan 13 20:46:51.233207 containerd[1502]: time="2025-01-13T20:46:51.233174478Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully" Jan 13 20:46:51.233207 containerd[1502]: time="2025-01-13T20:46:51.233185832Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully" Jan 13 20:46:51.233814 containerd[1502]: time="2025-01-13T20:46:51.233457291Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\"" Jan 13 20:46:51.233814 containerd[1502]: time="2025-01-13T20:46:51.233600637Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully" Jan 13 20:46:51.233814 containerd[1502]: 
time="2025-01-13T20:46:51.233614164Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully" Jan 13 20:46:51.234082 kubelet[2604]: E0113 20:46:51.234064 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:51.235180 containerd[1502]: time="2025-01-13T20:46:51.234709906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:6,}" Jan 13 20:46:51.235074 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:49058.service: Deactivated successfully. Jan 13 20:46:51.237138 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:46:51.238231 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:46:51.239194 systemd-logind[1488]: Removed session 10. Jan 13 20:46:51.429077 systemd[1]: run-netns-cni\x2d13509b31\x2d93f9\x2d9eb4\x2de35e\x2d0bfb7ed17d2d.mount: Deactivated successfully. Jan 13 20:46:51.429212 systemd[1]: run-netns-cni\x2da46e4ac6\x2d8adb\x2d98d0\x2d903b\x2d6b22aad4cbcb.mount: Deactivated successfully. Jan 13 20:46:51.429305 systemd[1]: run-netns-cni\x2dfc49b263\x2d6843\x2dff54\x2d6975\x2d9caf89f06b84.mount: Deactivated successfully. Jan 13 20:46:51.429411 systemd[1]: run-netns-cni\x2db6648084\x2de84a\x2dd461\x2dd67c\x2da3a2a4dcdeeb.mount: Deactivated successfully. Jan 13 20:46:51.429501 systemd[1]: run-netns-cni\x2d5a6fced5\x2dd8ac\x2d875d\x2d50ab\x2d51d0ecadcbc2.mount: Deactivated successfully. Jan 13 20:46:51.429586 systemd[1]: run-netns-cni\x2da14dceed\x2d400f\x2d4a6e\x2d08ba\x2dd2d244766d38.mount: Deactivated successfully. 
Jan 13 20:46:51.482552 systemd-networkd[1418]: calicdf2a587c2c: Link UP Jan 13 20:46:51.483285 systemd-networkd[1418]: calicdf2a587c2c: Gained carrier Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.285 [INFO][4983] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.295 [INFO][4983] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0 calico-kube-controllers-68d84f998- calico-system a8175534-edcd-43aa-9f18-0f7c717c2015 742 0 2025-01-13 20:46:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68d84f998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68d84f998-vczjr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicdf2a587c2c [] []}} ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.295 [INFO][4983] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.425 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" 
HandleID="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Workload="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.443 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" HandleID="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Workload="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68d84f998-vczjr", "timestamp":"2025-01-13 20:46:51.425351012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.444 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.444 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.444 [INFO][5014] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.447 [INFO][5014] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.453 [INFO][5014] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.459 [INFO][5014] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.461 [INFO][5014] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.463 [INFO][5014] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.463 [INFO][5014] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.464 [INFO][5014] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.467 [INFO][5014] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.472 [INFO][5014] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.472 [INFO][5014] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" host="localhost" Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.472 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:51.494833 containerd[1502]: 2025-01-13 20:46:51.472 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" HandleID="k8s-pod-network.cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Workload="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.495917 containerd[1502]: 2025-01-13 20:46:51.475 [INFO][4983] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0", GenerateName:"calico-kube-controllers-68d84f998-", Namespace:"calico-system", SelfLink:"", UID:"a8175534-edcd-43aa-9f18-0f7c717c2015", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d84f998", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68d84f998-vczjr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf2a587c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.495917 containerd[1502]: 2025-01-13 20:46:51.475 [INFO][4983] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.495917 containerd[1502]: 2025-01-13 20:46:51.475 [INFO][4983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdf2a587c2c ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.495917 containerd[1502]: 2025-01-13 20:46:51.483 [INFO][4983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.495917 containerd[1502]: 2025-01-13 20:46:51.483 [INFO][4983] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0", GenerateName:"calico-kube-controllers-68d84f998-", Namespace:"calico-system", SelfLink:"", UID:"a8175534-edcd-43aa-9f18-0f7c717c2015", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68d84f998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b", Pod:"calico-kube-controllers-68d84f998-vczjr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf2a587c2c", MAC:"c2:3c:eb:0c:59:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.495917 containerd[1502]: 2025-01-13 20:46:51.492 [INFO][4983] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b" Namespace="calico-system" Pod="calico-kube-controllers-68d84f998-vczjr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68d84f998--vczjr-eth0" Jan 13 20:46:51.541259 containerd[1502]: time="2025-01-13T20:46:51.541111543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:51.541420 containerd[1502]: time="2025-01-13T20:46:51.541379286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:51.541530 containerd[1502]: time="2025-01-13T20:46:51.541422061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.542019 containerd[1502]: time="2025-01-13T20:46:51.541946996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.567030 systemd[1]: Started cri-containerd-cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b.scope - libcontainer container cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b. 
Jan 13 20:46:51.578679 systemd-networkd[1418]: cali01f7b6470c7: Link UP Jan 13 20:46:51.579517 systemd-networkd[1418]: cali01f7b6470c7: Gained carrier Jan 13 20:46:51.581432 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.260 [INFO][4971] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.284 [INFO][4971] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4zbbn-eth0 csi-node-driver- calico-system aab8c9ce-5c63-4682-8391-52de7028ab06 639 0 2025-01-13 20:46:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4zbbn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali01f7b6470c7 [] []}} ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.284 [INFO][4971] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.425 [INFO][5015] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" 
HandleID="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Workload="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.445 [INFO][5015] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" HandleID="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Workload="localhost-k8s-csi--node--driver--4zbbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f6510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4zbbn", "timestamp":"2025-01-13 20:46:51.425181044 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.445 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.472 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.472 [INFO][5015] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.547 [INFO][5015] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.551 [INFO][5015] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.560 [INFO][5015] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.562 [INFO][5015] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.564 [INFO][5015] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.564 [INFO][5015] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.565 [INFO][5015] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.569 [INFO][5015] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.573 [INFO][5015] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.573 [INFO][5015] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" host="localhost" Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.573 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:51.590006 containerd[1502]: 2025-01-13 20:46:51.574 [INFO][5015] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" HandleID="k8s-pod-network.0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Workload="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.590554 containerd[1502]: 2025-01-13 20:46:51.576 [INFO][4971] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4zbbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aab8c9ce-5c63-4682-8391-52de7028ab06", ResourceVersion:"639", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4zbbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01f7b6470c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.590554 containerd[1502]: 2025-01-13 20:46:51.576 [INFO][4971] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.590554 containerd[1502]: 2025-01-13 20:46:51.576 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01f7b6470c7 ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.590554 containerd[1502]: 2025-01-13 20:46:51.578 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.590554 containerd[1502]: 2025-01-13 20:46:51.578 [INFO][4971] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" 
Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4zbbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aab8c9ce-5c63-4682-8391-52de7028ab06", ResourceVersion:"639", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d", Pod:"csi-node-driver-4zbbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01f7b6470c7", MAC:"5e:32:34:ce:2b:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.590554 containerd[1502]: 2025-01-13 20:46:51.587 [INFO][4971] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d" Namespace="calico-system" Pod="csi-node-driver-4zbbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zbbn-eth0" Jan 13 20:46:51.611003 containerd[1502]: 
time="2025-01-13T20:46:51.610960800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68d84f998-vczjr,Uid:a8175534-edcd-43aa-9f18-0f7c717c2015,Namespace:calico-system,Attempt:7,} returns sandbox id \"cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b\"" Jan 13 20:46:51.612584 containerd[1502]: time="2025-01-13T20:46:51.612511929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:51.613353 containerd[1502]: time="2025-01-13T20:46:51.612580405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:51.613353 containerd[1502]: time="2025-01-13T20:46:51.612600535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.613353 containerd[1502]: time="2025-01-13T20:46:51.612768029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.613353 containerd[1502]: time="2025-01-13T20:46:51.612953368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:46:51.639000 systemd[1]: Started cri-containerd-0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d.scope - libcontainer container 0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d. 
Jan 13 20:46:51.651344 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:51.662974 containerd[1502]: time="2025-01-13T20:46:51.662918533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zbbn,Uid:aab8c9ce-5c63-4682-8391-52de7028ab06,Namespace:calico-system,Attempt:5,} returns sandbox id \"0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d\"" Jan 13 20:46:51.678473 systemd-networkd[1418]: cali75014c3960f: Link UP Jan 13 20:46:51.678693 systemd-networkd[1418]: cali75014c3960f: Gained carrier Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.285 [INFO][4989] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.300 [INFO][4989] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0 calico-apiserver-5848c99678- calico-apiserver 16b2a8f4-c5cc-4088-8589-adcfeced0140 740 0 2025-01-13 20:46:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5848c99678 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5848c99678-rzkmh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali75014c3960f [] []}} ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.300 [INFO][4989] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" 
Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.425 [INFO][5016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" HandleID="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Workload="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.448 [INFO][5016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" HandleID="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Workload="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000408be0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5848c99678-rzkmh", "timestamp":"2025-01-13 20:46:51.424900234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.449 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.573 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.574 [INFO][5016] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.647 [INFO][5016] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.651 [INFO][5016] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.659 [INFO][5016] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.663 [INFO][5016] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.665 [INFO][5016] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.665 [INFO][5016] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.666 [INFO][5016] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182 Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.670 [INFO][5016] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.674 [INFO][5016] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.674 [INFO][5016] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" host="localhost" Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.674 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:51.687897 containerd[1502]: 2025-01-13 20:46:51.674 [INFO][5016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" HandleID="k8s-pod-network.01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Workload="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.688680 containerd[1502]: 2025-01-13 20:46:51.676 [INFO][4989] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0", GenerateName:"calico-apiserver-5848c99678-", Namespace:"calico-apiserver", SelfLink:"", UID:"16b2a8f4-c5cc-4088-8589-adcfeced0140", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5848c99678", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5848c99678-rzkmh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75014c3960f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.688680 containerd[1502]: 2025-01-13 20:46:51.677 [INFO][4989] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.688680 containerd[1502]: 2025-01-13 20:46:51.677 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75014c3960f ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.688680 containerd[1502]: 2025-01-13 20:46:51.678 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.688680 containerd[1502]: 2025-01-13 20:46:51.678 [INFO][4989] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0", GenerateName:"calico-apiserver-5848c99678-", Namespace:"calico-apiserver", SelfLink:"", UID:"16b2a8f4-c5cc-4088-8589-adcfeced0140", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5848c99678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182", Pod:"calico-apiserver-5848c99678-rzkmh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75014c3960f", MAC:"fa:4d:7b:2b:2b:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.688680 containerd[1502]: 2025-01-13 20:46:51.685 [INFO][4989] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-rzkmh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--rzkmh-eth0" Jan 13 20:46:51.712655 containerd[1502]: time="2025-01-13T20:46:51.712571117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:51.712655 containerd[1502]: time="2025-01-13T20:46:51.712618010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:51.712655 containerd[1502]: time="2025-01-13T20:46:51.712628462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.712833 containerd[1502]: time="2025-01-13T20:46:51.712703651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.733078 systemd[1]: Started cri-containerd-01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182.scope - libcontainer container 01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182. 
Jan 13 20:46:51.745147 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:51.772657 containerd[1502]: time="2025-01-13T20:46:51.772602947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-rzkmh,Uid:16b2a8f4-c5cc-4088-8589-adcfeced0140,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182\"" Jan 13 20:46:51.783169 systemd-networkd[1418]: cali8e97887343f: Link UP Jan 13 20:46:51.783525 systemd-networkd[1418]: cali8e97887343f: Gained carrier Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.370 [INFO][5032] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.386 [INFO][5032] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0 coredns-6f6b679f8f- kube-system c50d7bc1-3e5c-4530-b31c-b018c3931fbb 739 0 2025-01-13 20:46:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-k7xxm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e97887343f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.386 [INFO][5032] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.448 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" HandleID="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Workload="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.459 [INFO][5074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" HandleID="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Workload="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-k7xxm", "timestamp":"2025-01-13 20:46:51.44848246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.459 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.674 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.674 [INFO][5074] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.749 [INFO][5074] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.753 [INFO][5074] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.758 [INFO][5074] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.760 [INFO][5074] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.763 [INFO][5074] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.763 [INFO][5074] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.767 [INFO][5074] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.773 [INFO][5074] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.778 [INFO][5074] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.778 [INFO][5074] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" host="localhost" Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.778 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:51.794675 containerd[1502]: 2025-01-13 20:46:51.778 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" HandleID="k8s-pod-network.e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Workload="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 20:46:51.795210 containerd[1502]: 2025-01-13 20:46:51.781 [INFO][5032] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c50d7bc1-3e5c-4530-b31c-b018c3931fbb", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-k7xxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e97887343f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.795210 containerd[1502]: 2025-01-13 20:46:51.781 [INFO][5032] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 20:46:51.795210 containerd[1502]: 2025-01-13 20:46:51.781 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e97887343f ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 20:46:51.795210 containerd[1502]: 2025-01-13 20:46:51.783 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 
20:46:51.795210 containerd[1502]: 2025-01-13 20:46:51.783 [INFO][5032] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c50d7bc1-3e5c-4530-b31c-b018c3931fbb", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b", Pod:"coredns-6f6b679f8f-k7xxm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e97887343f", MAC:"b6:69:b0:7b:ff:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.795210 containerd[1502]: 2025-01-13 20:46:51.791 [INFO][5032] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b" Namespace="kube-system" Pod="coredns-6f6b679f8f-k7xxm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--k7xxm-eth0" Jan 13 20:46:51.816619 containerd[1502]: time="2025-01-13T20:46:51.816082750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:51.816619 containerd[1502]: time="2025-01-13T20:46:51.816558286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:51.816619 containerd[1502]: time="2025-01-13T20:46:51.816577254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.816786 containerd[1502]: time="2025-01-13T20:46:51.816687424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.835032 systemd[1]: Started cri-containerd-e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b.scope - libcontainer container e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b. 
Jan 13 20:46:51.848727 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:51.876216 containerd[1502]: time="2025-01-13T20:46:51.876169909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-k7xxm,Uid:c50d7bc1-3e5c-4530-b31c-b018c3931fbb,Namespace:kube-system,Attempt:7,} returns sandbox id \"e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b\"" Jan 13 20:46:51.878255 kubelet[2604]: E0113 20:46:51.878031 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:51.880480 containerd[1502]: time="2025-01-13T20:46:51.880302348Z" level=info msg="CreateContainer within sandbox \"e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:46:51.886919 systemd-networkd[1418]: cali798db1d6521: Link UP Jan 13 20:46:51.888143 systemd-networkd[1418]: cali798db1d6521: Gained carrier Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.403 [INFO][5055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.418 [INFO][5055] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0 coredns-6f6b679f8f- kube-system c07ee0db-ecb3-404d-8299-14d38d4f24e5 737 0 2025-01-13 20:46:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-kkgfd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali798db1d6521 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.418 [INFO][5055] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.456 [INFO][5092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" HandleID="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Workload="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.464 [INFO][5092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" HandleID="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Workload="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e04e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-kkgfd", "timestamp":"2025-01-13 20:46:51.456984789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.464 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.778 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.778 [INFO][5092] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.848 [INFO][5092] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.854 [INFO][5092] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.859 [INFO][5092] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.860 [INFO][5092] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.862 [INFO][5092] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.862 [INFO][5092] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.864 [INFO][5092] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.871 [INFO][5092] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.879 [INFO][5092] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.879 [INFO][5092] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" host="localhost" Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.879 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:51.899804 containerd[1502]: 2025-01-13 20:46:51.879 [INFO][5092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" HandleID="k8s-pod-network.89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Workload="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 20:46:51.900359 containerd[1502]: 2025-01-13 20:46:51.883 [INFO][5055] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c07ee0db-ecb3-404d-8299-14d38d4f24e5", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-kkgfd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali798db1d6521", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.900359 containerd[1502]: 2025-01-13 20:46:51.883 [INFO][5055] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 20:46:51.900359 containerd[1502]: 2025-01-13 20:46:51.883 [INFO][5055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali798db1d6521 ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 20:46:51.900359 containerd[1502]: 2025-01-13 20:46:51.888 [INFO][5055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 
20:46:51.900359 containerd[1502]: 2025-01-13 20:46:51.888 [INFO][5055] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c07ee0db-ecb3-404d-8299-14d38d4f24e5", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c", Pod:"coredns-6f6b679f8f-kkgfd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali798db1d6521", MAC:"36:4c:00:e9:00:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:51.900359 containerd[1502]: 2025-01-13 20:46:51.896 [INFO][5055] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c" Namespace="kube-system" Pod="coredns-6f6b679f8f-kkgfd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--kkgfd-eth0" Jan 13 20:46:51.906141 containerd[1502]: time="2025-01-13T20:46:51.906089332Z" level=info msg="CreateContainer within sandbox \"e8cb3c46f7c6e2ab6b6ce70a50478e8dae4665328509a00847ab818e3e52304b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a15629f8f005178f3aa9e04ff41d82afed2372b675374eeba3ca75d187527c4\"" Jan 13 20:46:51.906811 containerd[1502]: time="2025-01-13T20:46:51.906677183Z" level=info msg="StartContainer for \"8a15629f8f005178f3aa9e04ff41d82afed2372b675374eeba3ca75d187527c4\"" Jan 13 20:46:51.923716 containerd[1502]: time="2025-01-13T20:46:51.923582112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:51.923716 containerd[1502]: time="2025-01-13T20:46:51.923658875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:51.923956 containerd[1502]: time="2025-01-13T20:46:51.923688584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.923956 containerd[1502]: time="2025-01-13T20:46:51.923826779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:51.938025 systemd[1]: Started cri-containerd-8a15629f8f005178f3aa9e04ff41d82afed2372b675374eeba3ca75d187527c4.scope - libcontainer container 8a15629f8f005178f3aa9e04ff41d82afed2372b675374eeba3ca75d187527c4. Jan 13 20:46:51.941584 systemd[1]: Started cri-containerd-89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c.scope - libcontainer container 89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c. Jan 13 20:46:51.954496 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:51.986098 containerd[1502]: time="2025-01-13T20:46:51.985951776Z" level=info msg="StartContainer for \"8a15629f8f005178f3aa9e04ff41d82afed2372b675374eeba3ca75d187527c4\" returns successfully" Jan 13 20:46:51.986250 containerd[1502]: time="2025-01-13T20:46:51.986232986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkgfd,Uid:c07ee0db-ecb3-404d-8299-14d38d4f24e5,Namespace:kube-system,Attempt:6,} returns sandbox id \"89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c\"" Jan 13 20:46:51.987733 kubelet[2604]: E0113 20:46:51.987310 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:51.990021 containerd[1502]: time="2025-01-13T20:46:51.989999546Z" level=info msg="CreateContainer within sandbox \"89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:46:51.991611 systemd-networkd[1418]: cali32d0da45d3d: Link UP Jan 13 20:46:51.992807 systemd-networkd[1418]: cali32d0da45d3d: Gained carrier Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.382 [INFO][5034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:52.006157 containerd[1502]: 
2025-01-13 20:46:51.396 [INFO][5034] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0 calico-apiserver-5848c99678- calico-apiserver 6253a823-a0be-41d8-b9d8-038c03511377 741 0 2025-01-13 20:46:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5848c99678 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5848c99678-gl2ks eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32d0da45d3d [] []}} ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.397 [INFO][5034] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.467 [INFO][5085] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" HandleID="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Workload="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.544 [INFO][5085] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" HandleID="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" 
Workload="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4a40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5848c99678-gl2ks", "timestamp":"2025-01-13 20:46:51.467708791 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.544 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.879 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.880 [INFO][5085] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.948 [INFO][5085] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.955 [INFO][5085] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.960 [INFO][5085] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.962 [INFO][5085] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.964 [INFO][5085] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.964 [INFO][5085] ipam/ipam.go 1180: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.965 [INFO][5085] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1 Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.970 [INFO][5085] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.980 [INFO][5085] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.980 [INFO][5085] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" host="localhost" Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.980 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:46:52.006157 containerd[1502]: 2025-01-13 20:46:51.980 [INFO][5085] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" HandleID="k8s-pod-network.2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Workload="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.007196 containerd[1502]: 2025-01-13 20:46:51.985 [INFO][5034] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0", GenerateName:"calico-apiserver-5848c99678-", Namespace:"calico-apiserver", SelfLink:"", UID:"6253a823-a0be-41d8-b9d8-038c03511377", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5848c99678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5848c99678-gl2ks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32d0da45d3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:52.007196 containerd[1502]: 2025-01-13 20:46:51.986 [INFO][5034] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.007196 containerd[1502]: 2025-01-13 20:46:51.986 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32d0da45d3d ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.007196 containerd[1502]: 2025-01-13 20:46:51.992 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.007196 containerd[1502]: 2025-01-13 20:46:51.993 [INFO][5034] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0", GenerateName:"calico-apiserver-5848c99678-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"6253a823-a0be-41d8-b9d8-038c03511377", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5848c99678", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1", Pod:"calico-apiserver-5848c99678-gl2ks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32d0da45d3d", MAC:"aa:91:a1:35:51:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:52.007196 containerd[1502]: 2025-01-13 20:46:52.001 [INFO][5034] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1" Namespace="calico-apiserver" Pod="calico-apiserver-5848c99678-gl2ks" WorkloadEndpoint="localhost-k8s-calico--apiserver--5848c99678--gl2ks-eth0" Jan 13 20:46:52.018468 containerd[1502]: time="2025-01-13T20:46:52.018417966Z" level=info msg="CreateContainer within sandbox \"89e3095775c035deb65ac1e91b51247cbbb97be4045ed67b745685a6f99c1b2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97c46a9c51ff301d79b7eaac0fc7d7b421c046d39cf88b6cbe21dfe7070a8ffd\"" Jan 13 20:46:52.020856 containerd[1502]: 
time="2025-01-13T20:46:52.020730585Z" level=info msg="StartContainer for \"97c46a9c51ff301d79b7eaac0fc7d7b421c046d39cf88b6cbe21dfe7070a8ffd\"" Jan 13 20:46:52.028587 containerd[1502]: time="2025-01-13T20:46:52.028409735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:52.029124 containerd[1502]: time="2025-01-13T20:46:52.029058285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:52.029290 containerd[1502]: time="2025-01-13T20:46:52.029140358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:52.029336 containerd[1502]: time="2025-01-13T20:46:52.029302631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:52.050406 systemd[1]: Started cri-containerd-2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1.scope - libcontainer container 2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1. Jan 13 20:46:52.053851 systemd[1]: Started cri-containerd-97c46a9c51ff301d79b7eaac0fc7d7b421c046d39cf88b6cbe21dfe7070a8ffd.scope - libcontainer container 97c46a9c51ff301d79b7eaac0fc7d7b421c046d39cf88b6cbe21dfe7070a8ffd. 
Jan 13 20:46:52.103079 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:52.199566 containerd[1502]: time="2025-01-13T20:46:52.199377200Z" level=info msg="StartContainer for \"97c46a9c51ff301d79b7eaac0fc7d7b421c046d39cf88b6cbe21dfe7070a8ffd\" returns successfully" Jan 13 20:46:52.200941 containerd[1502]: time="2025-01-13T20:46:52.199428441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5848c99678-gl2ks,Uid:6253a823-a0be-41d8-b9d8-038c03511377,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1\"" Jan 13 20:46:52.252785 kubelet[2604]: E0113 20:46:52.252739 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:52.263332 kubelet[2604]: E0113 20:46:52.263293 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:52.265622 kubelet[2604]: I0113 20:46:52.265597 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:52.266396 kubelet[2604]: E0113 20:46:52.266335 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:52.290146 kubelet[2604]: I0113 20:46:52.289906 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-k7xxm" podStartSLOduration=27.289885394 podStartE2EDuration="27.289885394s" podCreationTimestamp="2025-01-13 20:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:52.276628312 +0000 
UTC m=+32.388243399" watchObservedRunningTime="2025-01-13 20:46:52.289885394 +0000 UTC m=+32.401500481"
Jan 13 20:46:52.304651 kubelet[2604]: I0113 20:46:52.304281 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kkgfd" podStartSLOduration=27.304262662 podStartE2EDuration="27.304262662s" podCreationTimestamp="2025-01-13 20:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:52.290442 +0000 UTC m=+32.402057097" watchObservedRunningTime="2025-01-13 20:46:52.304262662 +0000 UTC m=+32.415877749"
Jan 13 20:46:53.166060 systemd-networkd[1418]: calicdf2a587c2c: Gained IPv6LL
Jan 13 20:46:53.270379 kubelet[2604]: E0113 20:46:53.269906 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:53.272720 kubelet[2604]: E0113 20:46:53.272547 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:53.293024 systemd-networkd[1418]: cali798db1d6521: Gained IPv6LL
Jan 13 20:46:53.357129 systemd-networkd[1418]: cali75014c3960f: Gained IPv6LL
Jan 13 20:46:53.485021 systemd-networkd[1418]: cali01f7b6470c7: Gained IPv6LL
Jan 13 20:46:53.543093 kubelet[2604]: I0113 20:46:53.543038 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:46:53.543555 kubelet[2604]: E0113 20:46:53.543523 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:53.552204 systemd-networkd[1418]: cali8e97887343f: Gained IPv6LL
Jan 13 20:46:53.741021 systemd-networkd[1418]: cali32d0da45d3d: Gained IPv6LL
Jan 13 20:46:53.816406 kubelet[2604]: I0113 20:46:53.816357 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:46:53.817425 kubelet[2604]: E0113 20:46:53.817386 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:54.064677 containerd[1502]: time="2025-01-13T20:46:54.064617331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:54.065342 containerd[1502]: time="2025-01-13T20:46:54.065309244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 13 20:46:54.066437 containerd[1502]: time="2025-01-13T20:46:54.066398434Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:54.068526 containerd[1502]: time="2025-01-13T20:46:54.068480554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:54.069043 containerd[1502]: time="2025-01-13T20:46:54.069002600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.456017169s"
Jan 13 20:46:54.069043 containerd[1502]: time="2025-01-13T20:46:54.069035836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 13 20:46:54.069954 containerd[1502]: time="2025-01-13T20:46:54.069932084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 13 20:46:54.076495 containerd[1502]: time="2025-01-13T20:46:54.076455334Z" level=info msg="CreateContainer within sandbox \"cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 20:46:54.090310 containerd[1502]: time="2025-01-13T20:46:54.090271882Z" level=info msg="CreateContainer within sandbox \"cc4a73c02c75a35324e4f7b5bf48607cda8727d2fc216f5fb443d1cfda4ec96b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e25cd401ff7cb75cdc7c4c4e3ba63b848e885ac74a887e6636a3157975ae0f81\""
Jan 13 20:46:54.090763 containerd[1502]: time="2025-01-13T20:46:54.090723989Z" level=info msg="StartContainer for \"e25cd401ff7cb75cdc7c4c4e3ba63b848e885ac74a887e6636a3157975ae0f81\""
Jan 13 20:46:54.122060 systemd[1]: Started cri-containerd-e25cd401ff7cb75cdc7c4c4e3ba63b848e885ac74a887e6636a3157975ae0f81.scope - libcontainer container e25cd401ff7cb75cdc7c4c4e3ba63b848e885ac74a887e6636a3157975ae0f81.
Jan 13 20:46:54.164175 containerd[1502]: time="2025-01-13T20:46:54.164133011Z" level=info msg="StartContainer for \"e25cd401ff7cb75cdc7c4c4e3ba63b848e885ac74a887e6636a3157975ae0f81\" returns successfully"
Jan 13 20:46:54.275195 kubelet[2604]: E0113 20:46:54.275158 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:54.275966 kubelet[2604]: E0113 20:46:54.275939 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:54.276211 kubelet[2604]: E0113 20:46:54.276180 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:54.350027 kubelet[2604]: I0113 20:46:54.349790 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68d84f998-vczjr" podStartSLOduration=21.8925937 podStartE2EDuration="24.349774509s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:51.612569494 +0000 UTC m=+31.724184581" lastFinishedPulling="2025-01-13 20:46:54.069750303 +0000 UTC m=+34.181365390" observedRunningTime="2025-01-13 20:46:54.286399271 +0000 UTC m=+34.398014359" watchObservedRunningTime="2025-01-13 20:46:54.349774509 +0000 UTC m=+34.461389596"
Jan 13 20:46:54.493903 kernel: bpftool[5793]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 13 20:46:54.740743 systemd-networkd[1418]: vxlan.calico: Link UP
Jan 13 20:46:54.740756 systemd-networkd[1418]: vxlan.calico: Gained carrier
Jan 13 20:46:55.279688 kubelet[2604]: E0113 20:46:55.279632 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:55.350442 containerd[1502]: time="2025-01-13T20:46:55.350371123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:55.351565 containerd[1502]: time="2025-01-13T20:46:55.351497314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 13 20:46:55.353428 containerd[1502]: time="2025-01-13T20:46:55.353364525Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:55.355567 containerd[1502]: time="2025-01-13T20:46:55.355518832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:55.356339 containerd[1502]: time="2025-01-13T20:46:55.356283939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.28632415s"
Jan 13 20:46:55.356339 containerd[1502]: time="2025-01-13T20:46:55.356324198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 13 20:46:55.357906 containerd[1502]: time="2025-01-13T20:46:55.357784592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 20:46:55.358859 containerd[1502]: time="2025-01-13T20:46:55.358817018Z" level=info msg="CreateContainer within sandbox \"0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 13 20:46:55.401161 containerd[1502]: time="2025-01-13T20:46:55.401108501Z" level=info msg="CreateContainer within sandbox \"0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"40deb23dc5ad295f8f3180136bef862e71cd39207723f35058d96f34ea382a0f\""
Jan 13 20:46:55.401909 containerd[1502]: time="2025-01-13T20:46:55.401809771Z" level=info msg="StartContainer for \"40deb23dc5ad295f8f3180136bef862e71cd39207723f35058d96f34ea382a0f\""
Jan 13 20:46:55.435027 systemd[1]: Started cri-containerd-40deb23dc5ad295f8f3180136bef862e71cd39207723f35058d96f34ea382a0f.scope - libcontainer container 40deb23dc5ad295f8f3180136bef862e71cd39207723f35058d96f34ea382a0f.
Jan 13 20:46:55.470443 containerd[1502]: time="2025-01-13T20:46:55.470399013Z" level=info msg="StartContainer for \"40deb23dc5ad295f8f3180136bef862e71cd39207723f35058d96f34ea382a0f\" returns successfully"
Jan 13 20:46:56.243409 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:49070.service - OpenSSH per-connection server daemon (10.0.0.1:49070).
Jan 13 20:46:56.300474 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 49070 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:46:56.302626 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:56.307594 systemd-logind[1488]: New session 11 of user core.
Jan 13 20:46:56.317049 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:46:56.365373 systemd-networkd[1418]: vxlan.calico: Gained IPv6LL
Jan 13 20:46:56.446130 sshd[5914]: Connection closed by 10.0.0.1 port 49070
Jan 13 20:46:56.446561 sshd-session[5912]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:56.451313 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:49070.service: Deactivated successfully.
Jan 13 20:46:56.453734 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:46:56.454508 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:46:56.455643 systemd-logind[1488]: Removed session 11.
Jan 13 20:46:58.284037 containerd[1502]: time="2025-01-13T20:46:58.283986688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:58.284851 containerd[1502]: time="2025-01-13T20:46:58.284787330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Jan 13 20:46:58.286170 containerd[1502]: time="2025-01-13T20:46:58.286134792Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:58.288778 containerd[1502]: time="2025-01-13T20:46:58.288756603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:58.289360 containerd[1502]: time="2025-01-13T20:46:58.289335978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.931509081s"
Jan 13 20:46:58.289396 containerd[1502]: time="2025-01-13T20:46:58.289358071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 13 20:46:58.290557 containerd[1502]: time="2025-01-13T20:46:58.290365192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 20:46:58.291541 containerd[1502]: time="2025-01-13T20:46:58.291509883Z" level=info msg="CreateContainer within sandbox \"01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 20:46:58.305353 containerd[1502]: time="2025-01-13T20:46:58.305308607Z" level=info msg="CreateContainer within sandbox \"01690453d386c434cd739852c9f8b8b50e6e0c5c3329ac366e047361b2e54182\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0c98956f5c37cd293512885812c22f9ea574e6b062e9c7fd8b3b2c0f3bf976d5\""
Jan 13 20:46:58.305895 containerd[1502]: time="2025-01-13T20:46:58.305856098Z" level=info msg="StartContainer for \"0c98956f5c37cd293512885812c22f9ea574e6b062e9c7fd8b3b2c0f3bf976d5\""
Jan 13 20:46:58.333866 systemd[1]: run-containerd-runc-k8s.io-0c98956f5c37cd293512885812c22f9ea574e6b062e9c7fd8b3b2c0f3bf976d5-runc.juHIkB.mount: Deactivated successfully.
Jan 13 20:46:58.343018 systemd[1]: Started cri-containerd-0c98956f5c37cd293512885812c22f9ea574e6b062e9c7fd8b3b2c0f3bf976d5.scope - libcontainer container 0c98956f5c37cd293512885812c22f9ea574e6b062e9c7fd8b3b2c0f3bf976d5.
Jan 13 20:46:58.384526 containerd[1502]: time="2025-01-13T20:46:58.384455958Z" level=info msg="StartContainer for \"0c98956f5c37cd293512885812c22f9ea574e6b062e9c7fd8b3b2c0f3bf976d5\" returns successfully"
Jan 13 20:46:58.664073 containerd[1502]: time="2025-01-13T20:46:58.663905903Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:58.664974 containerd[1502]: time="2025-01-13T20:46:58.664884897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 13 20:46:58.667234 containerd[1502]: time="2025-01-13T20:46:58.667191695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 376.781596ms"
Jan 13 20:46:58.667234 containerd[1502]: time="2025-01-13T20:46:58.667219530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 13 20:46:58.668428 containerd[1502]: time="2025-01-13T20:46:58.668397427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 20:46:58.669213 containerd[1502]: time="2025-01-13T20:46:58.669185645Z" level=info msg="CreateContainer within sandbox \"2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 20:46:58.696169 containerd[1502]: time="2025-01-13T20:46:58.691479714Z" level=info msg="CreateContainer within sandbox \"2047cd4dfb6bcb7a9f57e7bdfda5e6d14fc375fdd62d70befa868fc1154bb2e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fa646df29c4c1f5369e9cf339dbf7b32a0ff8a95ffa5ca85c4b68c8223e10961\""
Jan 13 20:46:58.698306 containerd[1502]: time="2025-01-13T20:46:58.698270642Z" level=info msg="StartContainer for \"fa646df29c4c1f5369e9cf339dbf7b32a0ff8a95ffa5ca85c4b68c8223e10961\""
Jan 13 20:46:58.746059 systemd[1]: Started cri-containerd-fa646df29c4c1f5369e9cf339dbf7b32a0ff8a95ffa5ca85c4b68c8223e10961.scope - libcontainer container fa646df29c4c1f5369e9cf339dbf7b32a0ff8a95ffa5ca85c4b68c8223e10961.
Jan 13 20:46:58.805666 containerd[1502]: time="2025-01-13T20:46:58.805587291Z" level=info msg="StartContainer for \"fa646df29c4c1f5369e9cf339dbf7b32a0ff8a95ffa5ca85c4b68c8223e10961\" returns successfully"
Jan 13 20:46:59.306379 kubelet[2604]: I0113 20:46:59.306314 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5848c99678-rzkmh" podStartSLOduration=22.791198323 podStartE2EDuration="29.306295122s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:51.775119187 +0000 UTC m=+31.886734284" lastFinishedPulling="2025-01-13 20:46:58.290216006 +0000 UTC m=+38.401831083" observedRunningTime="2025-01-13 20:46:59.304412716 +0000 UTC m=+39.416027803" watchObservedRunningTime="2025-01-13 20:46:59.306295122 +0000 UTC m=+39.417910199"
Jan 13 20:47:00.041696 containerd[1502]: time="2025-01-13T20:47:00.041642128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:00.042477 containerd[1502]: time="2025-01-13T20:47:00.042415423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 13 20:47:00.043541 containerd[1502]: time="2025-01-13T20:47:00.043503079Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:00.045803 containerd[1502]: time="2025-01-13T20:47:00.045760451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:47:00.046408 containerd[1502]: time="2025-01-13T20:47:00.046372969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.377944752s"
Jan 13 20:47:00.046444 containerd[1502]: time="2025-01-13T20:47:00.046417959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 13 20:47:00.048729 containerd[1502]: time="2025-01-13T20:47:00.048702545Z" level=info msg="CreateContainer within sandbox \"0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 20:47:00.062186 containerd[1502]: time="2025-01-13T20:47:00.062137758Z" level=info msg="CreateContainer within sandbox \"0a1ba1802ff2d3c8ed12c5f80c8ce2cbeaa93e2e9868608f744e46eee9bebb2d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"19273154b90ab106895fd5249cd46a849c1c4338bceb704071bb438c0f5507cb\""
Jan 13 20:47:00.062896 containerd[1502]: time="2025-01-13T20:47:00.062662583Z" level=info msg="StartContainer for \"19273154b90ab106895fd5249cd46a849c1c4338bceb704071bb438c0f5507cb\""
Jan 13 20:47:00.101016 systemd[1]: Started cri-containerd-19273154b90ab106895fd5249cd46a849c1c4338bceb704071bb438c0f5507cb.scope - libcontainer container 19273154b90ab106895fd5249cd46a849c1c4338bceb704071bb438c0f5507cb.
Jan 13 20:47:00.351361 containerd[1502]: time="2025-01-13T20:47:00.351180490Z" level=info msg="StartContainer for \"19273154b90ab106895fd5249cd46a849c1c4338bceb704071bb438c0f5507cb\" returns successfully"
Jan 13 20:47:00.356171 kubelet[2604]: I0113 20:47:00.355891 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:47:00.356171 kubelet[2604]: I0113 20:47:00.355931 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:47:00.368126 kubelet[2604]: I0113 20:47:00.367059 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5848c99678-gl2ks" podStartSLOduration=23.908995074 podStartE2EDuration="30.367042238s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:52.209817861 +0000 UTC m=+32.321432948" lastFinishedPulling="2025-01-13 20:46:58.667865025 +0000 UTC m=+38.779480112" observedRunningTime="2025-01-13 20:46:59.31825319 +0000 UTC m=+39.429868277" watchObservedRunningTime="2025-01-13 20:47:00.367042238 +0000 UTC m=+40.478657326"
Jan 13 20:47:01.022639 kubelet[2604]: I0113 20:47:01.022593 2604 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 20:47:01.022639 kubelet[2604]: I0113 20:47:01.022634 2604 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 20:47:01.462229 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:54340.service - OpenSSH per-connection server daemon (10.0.0.1:54340).
Jan 13 20:47:01.520996 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 54340 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:01.523385 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:01.528385 systemd-logind[1488]: New session 12 of user core.
Jan 13 20:47:01.538009 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:47:01.663653 sshd[6075]: Connection closed by 10.0.0.1 port 54340
Jan 13 20:47:01.664015 sshd-session[6073]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:01.668234 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:54340.service: Deactivated successfully.
Jan 13 20:47:01.670433 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:47:01.671054 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:47:01.672007 systemd-logind[1488]: Removed session 12.
Jan 13 20:47:06.677664 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:54354.service - OpenSSH per-connection server daemon (10.0.0.1:54354).
Jan 13 20:47:06.721017 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 54354 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:06.722502 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:06.726814 systemd-logind[1488]: New session 13 of user core.
Jan 13 20:47:06.741208 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:47:06.871083 sshd[6098]: Connection closed by 10.0.0.1 port 54354
Jan 13 20:47:06.871566 sshd-session[6096]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:06.880385 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:54354.service: Deactivated successfully.
Jan 13 20:47:06.882691 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:47:06.884966 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:47:06.895128 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:54362.service - OpenSSH per-connection server daemon (10.0.0.1:54362).
Jan 13 20:47:06.896166 systemd-logind[1488]: Removed session 13.
Jan 13 20:47:06.934357 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:06.935808 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:06.940197 systemd-logind[1488]: New session 14 of user core.
Jan 13 20:47:06.951016 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:47:07.114068 sshd[6115]: Connection closed by 10.0.0.1 port 54362
Jan 13 20:47:07.114750 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:07.127186 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:54362.service: Deactivated successfully.
Jan 13 20:47:07.129130 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:47:07.130852 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:47:07.137117 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:54378.service - OpenSSH per-connection server daemon (10.0.0.1:54378).
Jan 13 20:47:07.138509 systemd-logind[1488]: Removed session 14.
Jan 13 20:47:07.180349 sshd[6125]: Accepted publickey for core from 10.0.0.1 port 54378 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:07.182124 sshd-session[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:07.186605 systemd-logind[1488]: New session 15 of user core.
Jan 13 20:47:07.197082 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:47:07.322519 sshd[6127]: Connection closed by 10.0.0.1 port 54378
Jan 13 20:47:07.322940 sshd-session[6125]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:07.327119 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:54378.service: Deactivated successfully.
Jan 13 20:47:07.329731 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:47:07.330606 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:47:07.331867 systemd-logind[1488]: Removed session 15.
Jan 13 20:47:10.326889 kubelet[2604]: I0113 20:47:10.326848 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:47:10.341800 kubelet[2604]: I0113 20:47:10.341573 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4zbbn" podStartSLOduration=31.958917293 podStartE2EDuration="40.341556804s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:51.664473339 +0000 UTC m=+31.776088426" lastFinishedPulling="2025-01-13 20:47:00.04711285 +0000 UTC m=+40.158727937" observedRunningTime="2025-01-13 20:47:00.367672311 +0000 UTC m=+40.479287398" watchObservedRunningTime="2025-01-13 20:47:10.341556804 +0000 UTC m=+50.453171891"
Jan 13 20:47:12.334988 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:39094.service - OpenSSH per-connection server daemon (10.0.0.1:39094).
Jan 13 20:47:12.380073 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 39094 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:12.381571 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:12.385399 systemd-logind[1488]: New session 16 of user core.
Jan 13 20:47:12.396996 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:47:12.504720 sshd[6167]: Connection closed by 10.0.0.1 port 39094
Jan 13 20:47:12.505072 sshd-session[6165]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:12.509535 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:39094.service: Deactivated successfully.
Jan 13 20:47:12.511373 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:47:12.512124 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:47:12.512942 systemd-logind[1488]: Removed session 16.
Jan 13 20:47:17.515698 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:39100.service - OpenSSH per-connection server daemon (10.0.0.1:39100).
Jan 13 20:47:17.558364 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 39100 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:17.559975 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:17.563826 systemd-logind[1488]: New session 17 of user core.
Jan 13 20:47:17.573153 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:47:17.686377 sshd[6208]: Connection closed by 10.0.0.1 port 39100
Jan 13 20:47:17.686834 sshd-session[6206]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:17.699808 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:39100.service: Deactivated successfully.
Jan 13 20:47:17.701770 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:47:17.703425 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:47:17.709134 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:39104.service - OpenSSH per-connection server daemon (10.0.0.1:39104).
Jan 13 20:47:17.710265 systemd-logind[1488]: Removed session 17.
Jan 13 20:47:17.746991 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 39104 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:17.748560 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:17.752516 systemd-logind[1488]: New session 18 of user core.
Jan 13 20:47:17.762104 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:47:17.945727 sshd[6222]: Connection closed by 10.0.0.1 port 39104
Jan 13 20:47:17.946106 sshd-session[6220]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:17.954546 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:39104.service: Deactivated successfully.
Jan 13 20:47:17.956222 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:47:17.958003 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:47:17.963156 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:39106.service - OpenSSH per-connection server daemon (10.0.0.1:39106).
Jan 13 20:47:17.964600 systemd-logind[1488]: Removed session 18.
Jan 13 20:47:18.004279 sshd[6232]: Accepted publickey for core from 10.0.0.1 port 39106 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:18.006291 sshd-session[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:18.010080 systemd-logind[1488]: New session 19 of user core.
Jan 13 20:47:18.021008 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:47:19.631530 sshd[6234]: Connection closed by 10.0.0.1 port 39106
Jan 13 20:47:19.632050 sshd-session[6232]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:19.643749 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:39106.service: Deactivated successfully.
Jan 13 20:47:19.646025 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:47:19.649584 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:47:19.659336 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:39108.service - OpenSSH per-connection server daemon (10.0.0.1:39108).
Jan 13 20:47:19.660486 systemd-logind[1488]: Removed session 19.
Jan 13 20:47:19.697807 sshd[6252]: Accepted publickey for core from 10.0.0.1 port 39108 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:19.699464 sshd-session[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:19.703486 systemd-logind[1488]: New session 20 of user core.
Jan 13 20:47:19.712997 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:47:19.928724 sshd[6255]: Connection closed by 10.0.0.1 port 39108
Jan 13 20:47:19.929469 sshd-session[6252]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:19.938836 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:39108.service: Deactivated successfully.
Jan 13 20:47:19.941240 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:47:19.942659 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:47:19.949189 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:39120.service - OpenSSH per-connection server daemon (10.0.0.1:39120).
Jan 13 20:47:19.950355 systemd-logind[1488]: Removed session 20.
Jan 13 20:47:19.956052 containerd[1502]: time="2025-01-13T20:47:19.956010991Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\""
Jan 13 20:47:19.956413 containerd[1502]: time="2025-01-13T20:47:19.956137154Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully"
Jan 13 20:47:19.956413 containerd[1502]: time="2025-01-13T20:47:19.956183811Z" level=info msg="StopPodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully"
Jan 13 20:47:19.958929 containerd[1502]: time="2025-01-13T20:47:19.957329633Z" level=info msg="RemovePodSandbox for \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\""
Jan 13 20:47:19.971674 containerd[1502]: time="2025-01-13T20:47:19.971612817Z" level=info msg="Forcibly stopping sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\""
Jan 13 20:47:19.971811 containerd[1502]: time="2025-01-13T20:47:19.971751143Z" level=info msg="TearDown network for sandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" successfully"
Jan 13 20:47:19.987344 sshd[6265]: Accepted publickey for core from 10.0.0.1 port 39120 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs
Jan 13 20:47:19.988784 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:19.989451 containerd[1502]: time="2025-01-13T20:47:19.989394322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:47:19.989667 containerd[1502]: time="2025-01-13T20:47:19.989643684Z" level=info msg="RemovePodSandbox \"b6d5fb599849f5df770672f29a3e9ce6f9cb6fba18fbadfd695a25030bd721f0\" returns successfully"
Jan 13 20:47:19.990224 containerd[1502]: time="2025-01-13T20:47:19.990189305Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\""
Jan 13 20:47:19.990355 containerd[1502]: time="2025-01-13T20:47:19.990306292Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully"
Jan 13 20:47:19.990355 containerd[1502]: time="2025-01-13T20:47:19.990349011Z" level=info msg="StopPodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully"
Jan 13 20:47:19.990986 containerd[1502]: time="2025-01-13T20:47:19.990640801Z" level=info msg="RemovePodSandbox for \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\""
Jan 13 20:47:19.990986 containerd[1502]: time="2025-01-13T20:47:19.990678200Z" level=info msg="Forcibly stopping sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\""
Jan 13 20:47:19.990986 containerd[1502]: time="2025-01-13T20:47:19.990777555Z" level=info msg="TearDown network for sandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" successfully"
Jan 13 20:47:19.993775 systemd-logind[1488]: New session 21 of user core.
Jan 13 20:47:19.995451 containerd[1502]: time="2025-01-13T20:47:19.995419125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:47:19.995509 containerd[1502]: time="2025-01-13T20:47:19.995472724Z" level=info msg="RemovePodSandbox \"aadf4fc52e482a980d32cd60e9efd1d984426717c8c30ae709aec662b025a8a7\" returns successfully" Jan 13 20:47:19.995885 containerd[1502]: time="2025-01-13T20:47:19.995844342Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:47:19.995885 containerd[1502]: time="2025-01-13T20:47:19.995990333Z" level=info msg="TearDown network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" successfully" Jan 13 20:47:19.995885 containerd[1502]: time="2025-01-13T20:47:19.996001844Z" level=info msg="StopPodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" returns successfully" Jan 13 20:47:19.996221 containerd[1502]: time="2025-01-13T20:47:19.996200101Z" level=info msg="RemovePodSandbox for \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:47:19.996267 containerd[1502]: time="2025-01-13T20:47:19.996222313Z" level=info msg="Forcibly stopping sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\"" Jan 13 20:47:19.996328 containerd[1502]: time="2025-01-13T20:47:19.996291310Z" level=info msg="TearDown network for sandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" successfully" Jan 13 20:47:20.000014 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:47:20.000222 containerd[1502]: time="2025-01-13T20:47:20.000198549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.000620 containerd[1502]: time="2025-01-13T20:47:20.000567653Z" level=info msg="RemovePodSandbox \"6e56cdf48f1d4c46f307e0b26b203a3e939459adf0ecf69632d581e8ffad6846\" returns successfully" Jan 13 20:47:20.000966 containerd[1502]: time="2025-01-13T20:47:20.000934533Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" Jan 13 20:47:20.001066 containerd[1502]: time="2025-01-13T20:47:20.001047874Z" level=info msg="TearDown network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" successfully" Jan 13 20:47:20.001066 containerd[1502]: time="2025-01-13T20:47:20.001063302Z" level=info msg="StopPodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" returns successfully" Jan 13 20:47:20.002315 containerd[1502]: time="2025-01-13T20:47:20.001295714Z" level=info msg="RemovePodSandbox for \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" Jan 13 20:47:20.002315 containerd[1502]: time="2025-01-13T20:47:20.001322212Z" level=info msg="Forcibly stopping sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\"" Jan 13 20:47:20.002315 containerd[1502]: time="2025-01-13T20:47:20.001402933Z" level=info msg="TearDown network for sandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" successfully" Jan 13 20:47:20.005429 containerd[1502]: time="2025-01-13T20:47:20.005395370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.005473 containerd[1502]: time="2025-01-13T20:47:20.005454149Z" level=info msg="RemovePodSandbox \"a26ffc6d007024b3b6bf6044845bf4ea2d8442500546a23bd3b9af44147e4e11\" returns successfully" Jan 13 20:47:20.006023 containerd[1502]: time="2025-01-13T20:47:20.006000674Z" level=info msg="StopPodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\"" Jan 13 20:47:20.006109 containerd[1502]: time="2025-01-13T20:47:20.006090720Z" level=info msg="TearDown network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" successfully" Jan 13 20:47:20.006109 containerd[1502]: time="2025-01-13T20:47:20.006104095Z" level=info msg="StopPodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" returns successfully" Jan 13 20:47:20.006308 containerd[1502]: time="2025-01-13T20:47:20.006277086Z" level=info msg="RemovePodSandbox for \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\"" Jan 13 20:47:20.006308 containerd[1502]: time="2025-01-13T20:47:20.006302353Z" level=info msg="Forcibly stopping sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\"" Jan 13 20:47:20.006403 containerd[1502]: time="2025-01-13T20:47:20.006368796Z" level=info msg="TearDown network for sandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" successfully" Jan 13 20:47:20.010079 containerd[1502]: time="2025-01-13T20:47:20.010054986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.010142 containerd[1502]: time="2025-01-13T20:47:20.010087526Z" level=info msg="RemovePodSandbox \"81aa376f4a1f4998eac8e4e075884fe5857f79667069e07e1b4b9fa682a2e3de\" returns successfully" Jan 13 20:47:20.010291 containerd[1502]: time="2025-01-13T20:47:20.010268271Z" level=info msg="StopPodSandbox for \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\"" Jan 13 20:47:20.010377 containerd[1502]: time="2025-01-13T20:47:20.010343170Z" level=info msg="TearDown network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\" successfully" Jan 13 20:47:20.010377 containerd[1502]: time="2025-01-13T20:47:20.010356154Z" level=info msg="StopPodSandbox for \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\" returns successfully" Jan 13 20:47:20.010587 containerd[1502]: time="2025-01-13T20:47:20.010561095Z" level=info msg="RemovePodSandbox for \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\"" Jan 13 20:47:20.010587 containerd[1502]: time="2025-01-13T20:47:20.010581062Z" level=info msg="Forcibly stopping sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\"" Jan 13 20:47:20.010657 containerd[1502]: time="2025-01-13T20:47:20.010641333Z" level=info msg="TearDown network for sandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\" successfully" Jan 13 20:47:20.016473 containerd[1502]: time="2025-01-13T20:47:20.016441304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.016473 containerd[1502]: time="2025-01-13T20:47:20.016475358Z" level=info msg="RemovePodSandbox \"afc4b052e198bbe0dd70cd0bb76bf9b6337e8e935a1317b36c6da075c8e4809f\" returns successfully" Jan 13 20:47:20.016867 containerd[1502]: time="2025-01-13T20:47:20.016846096Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:47:20.016955 containerd[1502]: time="2025-01-13T20:47:20.016939489Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:47:20.016955 containerd[1502]: time="2025-01-13T20:47:20.016952723Z" level=info msg="StopPodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:47:20.017269 containerd[1502]: time="2025-01-13T20:47:20.017229958Z" level=info msg="RemovePodSandbox for \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:47:20.017332 containerd[1502]: time="2025-01-13T20:47:20.017269231Z" level=info msg="Forcibly stopping sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\"" Jan 13 20:47:20.017416 containerd[1502]: time="2025-01-13T20:47:20.017374586Z" level=info msg="TearDown network for sandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" successfully" Jan 13 20:47:20.022127 containerd[1502]: time="2025-01-13T20:47:20.022100975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.022195 containerd[1502]: time="2025-01-13T20:47:20.022151279Z" level=info msg="RemovePodSandbox \"02df65b4866fc3254453961ee74ccec449656241ce73598299eb8d747d4cc2bb\" returns successfully" Jan 13 20:47:20.022456 containerd[1502]: time="2025-01-13T20:47:20.022431108Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:47:20.022530 containerd[1502]: time="2025-01-13T20:47:20.022512349Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:47:20.022530 containerd[1502]: time="2025-01-13T20:47:20.022524501Z" level=info msg="StopPodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:47:20.022808 containerd[1502]: time="2025-01-13T20:47:20.022776539Z" level=info msg="RemovePodSandbox for \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:47:20.022808 containerd[1502]: time="2025-01-13T20:47:20.022799662Z" level=info msg="Forcibly stopping sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\"" Jan 13 20:47:20.022950 containerd[1502]: time="2025-01-13T20:47:20.022863520Z" level=info msg="TearDown network for sandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" successfully" Jan 13 20:47:20.028440 containerd[1502]: time="2025-01-13T20:47:20.028397859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.028440 containerd[1502]: time="2025-01-13T20:47:20.028448273Z" level=info msg="RemovePodSandbox \"52a0297e7ef31c0ef86d1e0d7829c1177c60307703a9b2da576f74b4187c0527\" returns successfully" Jan 13 20:47:20.028798 containerd[1502]: time="2025-01-13T20:47:20.028729664Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:47:20.028971 containerd[1502]: time="2025-01-13T20:47:20.028824069Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:47:20.028971 containerd[1502]: time="2025-01-13T20:47:20.028835491Z" level=info msg="StopPodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:47:20.029084 containerd[1502]: time="2025-01-13T20:47:20.029049458Z" level=info msg="RemovePodSandbox for \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:47:20.029084 containerd[1502]: time="2025-01-13T20:47:20.029082078Z" level=info msg="Forcibly stopping sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\"" Jan 13 20:47:20.029172 containerd[1502]: time="2025-01-13T20:47:20.029142390Z" level=info msg="TearDown network for sandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" successfully" Jan 13 20:47:20.034588 containerd[1502]: time="2025-01-13T20:47:20.034552037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.034588 containerd[1502]: time="2025-01-13T20:47:20.034588284Z" level=info msg="RemovePodSandbox \"c8b3ad45c4fe854308004f71a7a60d5df91bf4dae58fe631947acf17e883e228\" returns successfully" Jan 13 20:47:20.034964 containerd[1502]: time="2025-01-13T20:47:20.034939005Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:47:20.035060 containerd[1502]: time="2025-01-13T20:47:20.035037788Z" level=info msg="TearDown network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" successfully" Jan 13 20:47:20.035060 containerd[1502]: time="2025-01-13T20:47:20.035053578Z" level=info msg="StopPodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" returns successfully" Jan 13 20:47:20.035479 containerd[1502]: time="2025-01-13T20:47:20.035441698Z" level=info msg="RemovePodSandbox for \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:47:20.035528 containerd[1502]: time="2025-01-13T20:47:20.035487733Z" level=info msg="Forcibly stopping sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\"" Jan 13 20:47:20.035672 containerd[1502]: time="2025-01-13T20:47:20.035604309Z" level=info msg="TearDown network for sandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" successfully" Jan 13 20:47:20.040792 containerd[1502]: time="2025-01-13T20:47:20.040749386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.040852 containerd[1502]: time="2025-01-13T20:47:20.040822402Z" level=info msg="RemovePodSandbox \"6a41326b65a917a1cbf33bb1afc1cc17076d215107b30e08ba5d04d00f9ef621\" returns successfully" Jan 13 20:47:20.041157 containerd[1502]: time="2025-01-13T20:47:20.041135132Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" Jan 13 20:47:20.041354 containerd[1502]: time="2025-01-13T20:47:20.041323431Z" level=info msg="TearDown network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" successfully" Jan 13 20:47:20.041354 containerd[1502]: time="2025-01-13T20:47:20.041339722Z" level=info msg="StopPodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" returns successfully" Jan 13 20:47:20.041777 containerd[1502]: time="2025-01-13T20:47:20.041649505Z" level=info msg="RemovePodSandbox for \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" Jan 13 20:47:20.041777 containerd[1502]: time="2025-01-13T20:47:20.041684371Z" level=info msg="Forcibly stopping sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\"" Jan 13 20:47:20.041777 containerd[1502]: time="2025-01-13T20:47:20.041743721Z" level=info msg="TearDown network for sandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" successfully" Jan 13 20:47:20.047630 containerd[1502]: time="2025-01-13T20:47:20.047588004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.047715 containerd[1502]: time="2025-01-13T20:47:20.047637817Z" level=info msg="RemovePodSandbox \"bac6b78ecf40b3c6186644038d7c1640e286900b80ffbc9137b04896cf604586\" returns successfully" Jan 13 20:47:20.048386 containerd[1502]: time="2025-01-13T20:47:20.047996112Z" level=info msg="StopPodSandbox for \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\"" Jan 13 20:47:20.048386 containerd[1502]: time="2025-01-13T20:47:20.048127204Z" level=info msg="TearDown network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" successfully" Jan 13 20:47:20.048386 containerd[1502]: time="2025-01-13T20:47:20.048149005Z" level=info msg="StopPodSandbox for \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" returns successfully" Jan 13 20:47:20.048706 containerd[1502]: time="2025-01-13T20:47:20.048674571Z" level=info msg="RemovePodSandbox for \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\"" Jan 13 20:47:20.048785 containerd[1502]: time="2025-01-13T20:47:20.048710327Z" level=info msg="Forcibly stopping sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\"" Jan 13 20:47:20.048846 containerd[1502]: time="2025-01-13T20:47:20.048805293Z" level=info msg="TearDown network for sandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" successfully" Jan 13 20:47:20.057752 containerd[1502]: time="2025-01-13T20:47:20.057657447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.057752 containerd[1502]: time="2025-01-13T20:47:20.057732947Z" level=info msg="RemovePodSandbox \"186160f19d9f0ce9af47045ea989a153571b08d350eaa8d7133f339e4df83f3d\" returns successfully" Jan 13 20:47:20.057752 containerd[1502]: time="2025-01-13T20:47:20.058352997Z" level=info msg="StopPodSandbox for \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\"" Jan 13 20:47:20.057752 containerd[1502]: time="2025-01-13T20:47:20.058479933Z" level=info msg="TearDown network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\" successfully" Jan 13 20:47:20.057752 containerd[1502]: time="2025-01-13T20:47:20.058492978Z" level=info msg="StopPodSandbox for \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\" returns successfully" Jan 13 20:47:20.061908 containerd[1502]: time="2025-01-13T20:47:20.059460964Z" level=info msg="RemovePodSandbox for \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\"" Jan 13 20:47:20.061908 containerd[1502]: time="2025-01-13T20:47:20.059491380Z" level=info msg="Forcibly stopping sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\"" Jan 13 20:47:20.061908 containerd[1502]: time="2025-01-13T20:47:20.059574243Z" level=info msg="TearDown network for sandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\" successfully" Jan 13 20:47:20.064458 containerd[1502]: time="2025-01-13T20:47:20.064363589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.064588 containerd[1502]: time="2025-01-13T20:47:20.064559994Z" level=info msg="RemovePodSandbox \"63931c6c1814758c59b278ba2aad90b2ec44816dca076cab4859da24a8bae9ee\" returns successfully" Jan 13 20:47:20.065450 containerd[1502]: time="2025-01-13T20:47:20.065406174Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:47:20.065977 containerd[1502]: time="2025-01-13T20:47:20.065957437Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:47:20.066080 containerd[1502]: time="2025-01-13T20:47:20.066064986Z" level=info msg="StopPodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:47:20.066678 containerd[1502]: time="2025-01-13T20:47:20.066658096Z" level=info msg="RemovePodSandbox for \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:47:20.066948 containerd[1502]: time="2025-01-13T20:47:20.066916506Z" level=info msg="Forcibly stopping sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\"" Jan 13 20:47:20.067162 containerd[1502]: time="2025-01-13T20:47:20.067104105Z" level=info msg="TearDown network for sandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" successfully" Jan 13 20:47:20.073103 containerd[1502]: time="2025-01-13T20:47:20.073069592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.073254 containerd[1502]: time="2025-01-13T20:47:20.073239358Z" level=info msg="RemovePodSandbox \"619462d0131f0c93cf87afca5b6066b4b80cd55cc7d20ad871cd7edacd846838\" returns successfully" Jan 13 20:47:20.073736 containerd[1502]: time="2025-01-13T20:47:20.073676579Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:47:20.073810 containerd[1502]: time="2025-01-13T20:47:20.073790740Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 20:47:20.073810 containerd[1502]: time="2025-01-13T20:47:20.073804356Z" level=info msg="StopPodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:47:20.076299 containerd[1502]: time="2025-01-13T20:47:20.074015257Z" level=info msg="RemovePodSandbox for \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:47:20.076299 containerd[1502]: time="2025-01-13T20:47:20.074036857Z" level=info msg="Forcibly stopping sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\"" Jan 13 20:47:20.076299 containerd[1502]: time="2025-01-13T20:47:20.074099493Z" level=info msg="TearDown network for sandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" successfully" Jan 13 20:47:20.080653 containerd[1502]: time="2025-01-13T20:47:20.080596208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.080904 containerd[1502]: time="2025-01-13T20:47:20.080883731Z" level=info msg="RemovePodSandbox \"e412ccae67d5804079057950b14dd5cfddce1eed64d0c61e3a5d122ec94a7105\" returns successfully" Jan 13 20:47:20.081532 containerd[1502]: time="2025-01-13T20:47:20.081484135Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:47:20.081659 containerd[1502]: time="2025-01-13T20:47:20.081631809Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:47:20.081659 containerd[1502]: time="2025-01-13T20:47:20.081650794Z" level=info msg="StopPodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:47:20.083698 containerd[1502]: time="2025-01-13T20:47:20.082005132Z" level=info msg="RemovePodSandbox for \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:47:20.083698 containerd[1502]: time="2025-01-13T20:47:20.082036790Z" level=info msg="Forcibly stopping sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\"" Jan 13 20:47:20.083698 containerd[1502]: time="2025-01-13T20:47:20.082126287Z" level=info msg="TearDown network for sandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" successfully" Jan 13 20:47:20.087181 containerd[1502]: time="2025-01-13T20:47:20.087129579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.087388 containerd[1502]: time="2025-01-13T20:47:20.087221000Z" level=info msg="RemovePodSandbox \"d3e25de8c3267da67a59b3352b269ae8e027b1400db5b223210399d7fd427385\" returns successfully" Jan 13 20:47:20.087719 containerd[1502]: time="2025-01-13T20:47:20.087690430Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:47:20.088098 containerd[1502]: time="2025-01-13T20:47:20.088066378Z" level=info msg="TearDown network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" successfully" Jan 13 20:47:20.088098 containerd[1502]: time="2025-01-13T20:47:20.088087967Z" level=info msg="StopPodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" returns successfully" Jan 13 20:47:20.088464 containerd[1502]: time="2025-01-13T20:47:20.088433700Z" level=info msg="RemovePodSandbox for \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:47:20.088464 containerd[1502]: time="2025-01-13T20:47:20.088459116Z" level=info msg="Forcibly stopping sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\"" Jan 13 20:47:20.088598 containerd[1502]: time="2025-01-13T20:47:20.088549033Z" level=info msg="TearDown network for sandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" successfully" Jan 13 20:47:20.092662 containerd[1502]: time="2025-01-13T20:47:20.092611731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.092719 containerd[1502]: time="2025-01-13T20:47:20.092679717Z" level=info msg="RemovePodSandbox \"2e8ce36446c75da843846609c210088175998059c4d0530559d256ce626f49f9\" returns successfully" Jan 13 20:47:20.093058 containerd[1502]: time="2025-01-13T20:47:20.093039404Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" Jan 13 20:47:20.093279 containerd[1502]: time="2025-01-13T20:47:20.093263541Z" level=info msg="TearDown network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" successfully" Jan 13 20:47:20.093331 containerd[1502]: time="2025-01-13T20:47:20.093318944Z" level=info msg="StopPodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" returns successfully" Jan 13 20:47:20.093597 containerd[1502]: time="2025-01-13T20:47:20.093582803Z" level=info msg="RemovePodSandbox for \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" Jan 13 20:47:20.093785 containerd[1502]: time="2025-01-13T20:47:20.093770060Z" level=info msg="Forcibly stopping sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\"" Jan 13 20:47:20.093949 containerd[1502]: time="2025-01-13T20:47:20.093906173Z" level=info msg="TearDown network for sandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" successfully" Jan 13 20:47:20.099171 containerd[1502]: time="2025-01-13T20:47:20.099123473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.099257 containerd[1502]: time="2025-01-13T20:47:20.099222868Z" level=info msg="RemovePodSandbox \"049f3bf3139f088710252a29e930d2f51ff17880c1f74929107cc1b7612fcf5b\" returns successfully" Jan 13 20:47:20.099974 containerd[1502]: time="2025-01-13T20:47:20.099956007Z" level=info msg="StopPodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\"" Jan 13 20:47:20.100163 containerd[1502]: time="2025-01-13T20:47:20.100097050Z" level=info msg="TearDown network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" successfully" Jan 13 20:47:20.100414 containerd[1502]: time="2025-01-13T20:47:20.100397657Z" level=info msg="StopPodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" returns successfully" Jan 13 20:47:20.100774 containerd[1502]: time="2025-01-13T20:47:20.100748218Z" level=info msg="RemovePodSandbox for \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\"" Jan 13 20:47:20.100850 containerd[1502]: time="2025-01-13T20:47:20.100837343Z" level=info msg="Forcibly stopping sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\"" Jan 13 20:47:20.101008 containerd[1502]: time="2025-01-13T20:47:20.100978855Z" level=info msg="TearDown network for sandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" successfully" Jan 13 20:47:20.105430 containerd[1502]: time="2025-01-13T20:47:20.105408444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.105507 containerd[1502]: time="2025-01-13T20:47:20.105495125Z" level=info msg="RemovePodSandbox \"71f3eb78a459923d20fbc833a1e139439c485f12771cc1f30a621a28fd5ce2d3\" returns successfully" Jan 13 20:47:20.105828 containerd[1502]: time="2025-01-13T20:47:20.105812694Z" level=info msg="StopPodSandbox for \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\"" Jan 13 20:47:20.106032 containerd[1502]: time="2025-01-13T20:47:20.106017333Z" level=info msg="TearDown network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\" successfully" Jan 13 20:47:20.106092 containerd[1502]: time="2025-01-13T20:47:20.106081363Z" level=info msg="StopPodSandbox for \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\" returns successfully" Jan 13 20:47:20.106337 containerd[1502]: time="2025-01-13T20:47:20.106321768Z" level=info msg="RemovePodSandbox for \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\"" Jan 13 20:47:20.106501 containerd[1502]: time="2025-01-13T20:47:20.106485242Z" level=info msg="Forcibly stopping sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\"" Jan 13 20:47:20.106652 containerd[1502]: time="2025-01-13T20:47:20.106626674Z" level=info msg="TearDown network for sandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\" successfully" Jan 13 20:47:20.110304 containerd[1502]: time="2025-01-13T20:47:20.110285654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.110418 containerd[1502]: time="2025-01-13T20:47:20.110403822Z" level=info msg="RemovePodSandbox \"26c8c2a6f6909136e4a3e2c0e8bd465939fe848ec353c46fc7229a35f11d5ea6\" returns successfully" Jan 13 20:47:20.110934 containerd[1502]: time="2025-01-13T20:47:20.110887821Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:47:20.111079 containerd[1502]: time="2025-01-13T20:47:20.111051164Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:47:20.111079 containerd[1502]: time="2025-01-13T20:47:20.111071932Z" level=info msg="StopPodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:47:20.111710 containerd[1502]: time="2025-01-13T20:47:20.111424897Z" level=info msg="RemovePodSandbox for \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:47:20.111710 containerd[1502]: time="2025-01-13T20:47:20.111446858Z" level=info msg="Forcibly stopping sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\"" Jan 13 20:47:20.111710 containerd[1502]: time="2025-01-13T20:47:20.111509194Z" level=info msg="TearDown network for sandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" successfully" Jan 13 20:47:20.115058 containerd[1502]: time="2025-01-13T20:47:20.115014417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.115107 containerd[1502]: time="2025-01-13T20:47:20.115078787Z" level=info msg="RemovePodSandbox \"8da4f6525a50643a68e764d39c355c17f0fb932cf4f74f39f1ead80ba497bba6\" returns successfully" Jan 13 20:47:20.115428 containerd[1502]: time="2025-01-13T20:47:20.115386277Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:47:20.115494 containerd[1502]: time="2025-01-13T20:47:20.115467538Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:47:20.115494 containerd[1502]: time="2025-01-13T20:47:20.115477176Z" level=info msg="StopPodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:47:20.115933 containerd[1502]: time="2025-01-13T20:47:20.115696773Z" level=info msg="RemovePodSandbox for \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:47:20.115933 containerd[1502]: time="2025-01-13T20:47:20.115721088Z" level=info msg="Forcibly stopping sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\"" Jan 13 20:47:20.115933 containerd[1502]: time="2025-01-13T20:47:20.115809282Z" level=info msg="TearDown network for sandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" successfully" Jan 13 20:47:20.120302 containerd[1502]: time="2025-01-13T20:47:20.120252676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.120302 containerd[1502]: time="2025-01-13T20:47:20.120294043Z" level=info msg="RemovePodSandbox \"d8013331b00f43b6f8a0d258ec137cd6f0090c284aac701055ef2f2c3698f658\" returns successfully" Jan 13 20:47:20.120919 containerd[1502]: time="2025-01-13T20:47:20.120562090Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:47:20.120919 containerd[1502]: time="2025-01-13T20:47:20.120643341Z" level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:47:20.120919 containerd[1502]: time="2025-01-13T20:47:20.120652458Z" level=info msg="StopPodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:47:20.120919 containerd[1502]: time="2025-01-13T20:47:20.120840516Z" level=info msg="RemovePodSandbox for \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:47:20.120919 containerd[1502]: time="2025-01-13T20:47:20.120857839Z" level=info msg="Forcibly stopping sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\"" Jan 13 20:47:20.121057 containerd[1502]: time="2025-01-13T20:47:20.120930304Z" level=info msg="TearDown network for sandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" successfully" Jan 13 20:47:20.124404 containerd[1502]: time="2025-01-13T20:47:20.124362211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.124404 containerd[1502]: time="2025-01-13T20:47:20.124393009Z" level=info msg="RemovePodSandbox \"445066d97ba9bd4f9ecb5e20c6d72908d29119baca45e0ca10ab1e23f127e28b\" returns successfully" Jan 13 20:47:20.124906 containerd[1502]: time="2025-01-13T20:47:20.124764657Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:47:20.125006 containerd[1502]: time="2025-01-13T20:47:20.124991278Z" level=info msg="TearDown network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" successfully" Jan 13 20:47:20.125075 containerd[1502]: time="2025-01-13T20:47:20.125062390Z" level=info msg="StopPodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" returns successfully" Jan 13 20:47:20.125424 containerd[1502]: time="2025-01-13T20:47:20.125371764Z" level=info msg="RemovePodSandbox for \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:47:20.125481 containerd[1502]: time="2025-01-13T20:47:20.125426255Z" level=info msg="Forcibly stopping sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\"" Jan 13 20:47:20.125590 containerd[1502]: time="2025-01-13T20:47:20.125506274Z" level=info msg="TearDown network for sandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" successfully" Jan 13 20:47:20.129490 containerd[1502]: time="2025-01-13T20:47:20.129452826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.129578 containerd[1502]: time="2025-01-13T20:47:20.129526293Z" level=info msg="RemovePodSandbox \"7f91e320ee6cf62a64a52764d9393cdfec2ba40d8110fe1de54f80d1eb7f8188\" returns successfully" Jan 13 20:47:20.131635 containerd[1502]: time="2025-01-13T20:47:20.129976157Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" Jan 13 20:47:20.131635 containerd[1502]: time="2025-01-13T20:47:20.130132066Z" level=info msg="TearDown network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" successfully" Jan 13 20:47:20.131635 containerd[1502]: time="2025-01-13T20:47:20.130147335Z" level=info msg="StopPodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" returns successfully" Jan 13 20:47:20.131978 containerd[1502]: time="2025-01-13T20:47:20.131951752Z" level=info msg="RemovePodSandbox for \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" Jan 13 20:47:20.132345 containerd[1502]: time="2025-01-13T20:47:20.132247351Z" level=info msg="Forcibly stopping sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\"" Jan 13 20:47:20.132546 containerd[1502]: time="2025-01-13T20:47:20.132413239Z" level=info msg="TearDown network for sandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" successfully" Jan 13 20:47:20.132577 sshd[6269]: Connection closed by 10.0.0.1 port 39120 Jan 13 20:47:20.132949 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:20.137620 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:39120.service: Deactivated successfully. Jan 13 20:47:20.138945 containerd[1502]: time="2025-01-13T20:47:20.138910353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.139233 containerd[1502]: time="2025-01-13T20:47:20.139114903Z" level=info msg="RemovePodSandbox \"10ba7d0ba185a0ce46ef4c57a0dae7bb8c431cc76a2dc662891aa6d8e3244397\" returns successfully" Jan 13 20:47:20.139548 containerd[1502]: time="2025-01-13T20:47:20.139525224Z" level=info msg="StopPodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\"" Jan 13 20:47:20.139706 containerd[1502]: time="2025-01-13T20:47:20.139682987Z" level=info msg="TearDown network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" successfully" Jan 13 20:47:20.139706 containerd[1502]: time="2025-01-13T20:47:20.139701972Z" level=info msg="StopPodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" returns successfully" Jan 13 20:47:20.139985 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:47:20.140195 containerd[1502]: time="2025-01-13T20:47:20.140163238Z" level=info msg="RemovePodSandbox for \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\"" Jan 13 20:47:20.140195 containerd[1502]: time="2025-01-13T20:47:20.140193905Z" level=info msg="Forcibly stopping sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\"" Jan 13 20:47:20.140339 containerd[1502]: time="2025-01-13T20:47:20.140301074Z" level=info msg="TearDown network for sandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" successfully" Jan 13 20:47:20.140847 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:47:20.141770 systemd-logind[1488]: Removed session 21. Jan 13 20:47:20.144102 containerd[1502]: time="2025-01-13T20:47:20.144058916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.144102 containerd[1502]: time="2025-01-13T20:47:20.144100604Z" level=info msg="RemovePodSandbox \"dcd50c0072cfc106c07324e09850c21ea88bd9518280b255b128b3c1233fcc99\" returns successfully" Jan 13 20:47:20.148019 containerd[1502]: time="2025-01-13T20:47:20.144350207Z" level=info msg="StopPodSandbox for \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\"" Jan 13 20:47:20.148241 containerd[1502]: time="2025-01-13T20:47:20.148090276Z" level=info msg="TearDown network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\" successfully" Jan 13 20:47:20.148241 containerd[1502]: time="2025-01-13T20:47:20.148128146Z" level=info msg="StopPodSandbox for \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\" returns successfully" Jan 13 20:47:20.148408 containerd[1502]: time="2025-01-13T20:47:20.148379552Z" level=info msg="RemovePodSandbox for \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\"" Jan 13 20:47:20.148441 containerd[1502]: time="2025-01-13T20:47:20.148415660Z" level=info msg="Forcibly stopping sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\"" Jan 13 20:47:20.148542 containerd[1502]: time="2025-01-13T20:47:20.148497471Z" level=info msg="TearDown network for sandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\" successfully" Jan 13 20:47:20.152192 containerd[1502]: time="2025-01-13T20:47:20.152152743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.152246 containerd[1502]: time="2025-01-13T20:47:20.152196275Z" level=info msg="RemovePodSandbox \"dd1dce1b1c772dcd8a5df665aba8455f4ff3c8d47d842bf2e2557d81d4a97f07\" returns successfully" Jan 13 20:47:20.152499 containerd[1502]: time="2025-01-13T20:47:20.152460043Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:47:20.152571 containerd[1502]: time="2025-01-13T20:47:20.152552956Z" level=info msg="TearDown network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully" Jan 13 20:47:20.152595 containerd[1502]: time="2025-01-13T20:47:20.152570208Z" level=info msg="StopPodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully" Jan 13 20:47:20.152904 containerd[1502]: time="2025-01-13T20:47:20.152819410Z" level=info msg="RemovePodSandbox for \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:47:20.152904 containerd[1502]: time="2025-01-13T20:47:20.152846521Z" level=info msg="Forcibly stopping sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\"" Jan 13 20:47:20.152989 containerd[1502]: time="2025-01-13T20:47:20.152950394Z" level=info msg="TearDown network for sandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" successfully" Jan 13 20:47:20.157038 containerd[1502]: time="2025-01-13T20:47:20.157005267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.157092 containerd[1502]: time="2025-01-13T20:47:20.157055600Z" level=info msg="RemovePodSandbox \"e5372d1ba757c74fecf3616774cd134421ed3dc1e865f840160785aab947a8cd\" returns successfully" Jan 13 20:47:20.157350 containerd[1502]: time="2025-01-13T20:47:20.157320562Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:47:20.157422 containerd[1502]: time="2025-01-13T20:47:20.157402804Z" level=info msg="TearDown network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" successfully" Jan 13 20:47:20.157422 containerd[1502]: time="2025-01-13T20:47:20.157416189Z" level=info msg="StopPodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" returns successfully" Jan 13 20:47:20.157631 containerd[1502]: time="2025-01-13T20:47:20.157608356Z" level=info msg="RemovePodSandbox for \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:47:20.157682 containerd[1502]: time="2025-01-13T20:47:20.157633583Z" level=info msg="Forcibly stopping sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\"" Jan 13 20:47:20.157736 containerd[1502]: time="2025-01-13T20:47:20.157703251Z" level=info msg="TearDown network for sandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" successfully" Jan 13 20:47:20.161227 containerd[1502]: time="2025-01-13T20:47:20.161193428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.161227 containerd[1502]: time="2025-01-13T20:47:20.161227761Z" level=info msg="RemovePodSandbox \"d8bad87f4fdbfaeed338d58dce2dddfe8d26b90dd4fda1c5746f6075580cf1e1\" returns successfully" Jan 13 20:47:20.161468 containerd[1502]: time="2025-01-13T20:47:20.161443982Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" Jan 13 20:47:20.161540 containerd[1502]: time="2025-01-13T20:47:20.161525042Z" level=info msg="TearDown network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" successfully" Jan 13 20:47:20.161540 containerd[1502]: time="2025-01-13T20:47:20.161536564Z" level=info msg="StopPodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" returns successfully" Jan 13 20:47:20.161882 containerd[1502]: time="2025-01-13T20:47:20.161838654Z" level=info msg="RemovePodSandbox for \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" Jan 13 20:47:20.161919 containerd[1502]: time="2025-01-13T20:47:20.161894178Z" level=info msg="Forcibly stopping sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\"" Jan 13 20:47:20.162030 containerd[1502]: time="2025-01-13T20:47:20.161979766Z" level=info msg="TearDown network for sandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" successfully" Jan 13 20:47:20.167215 containerd[1502]: time="2025-01-13T20:47:20.167164316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.167257 containerd[1502]: time="2025-01-13T20:47:20.167234696Z" level=info msg="RemovePodSandbox \"c8dfc03781d9025bc1cfcd2942c359b4c5c7d66eb223d79e4aee753377139916\" returns successfully" Jan 13 20:47:20.167700 containerd[1502]: time="2025-01-13T20:47:20.167654465Z" level=info msg="StopPodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\"" Jan 13 20:47:20.167836 containerd[1502]: time="2025-01-13T20:47:20.167817238Z" level=info msg="TearDown network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" successfully" Jan 13 20:47:20.167836 containerd[1502]: time="2025-01-13T20:47:20.167833497Z" level=info msg="StopPodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" returns successfully" Jan 13 20:47:20.168160 containerd[1502]: time="2025-01-13T20:47:20.168136921Z" level=info msg="RemovePodSandbox for \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\"" Jan 13 20:47:20.168160 containerd[1502]: time="2025-01-13T20:47:20.168159973Z" level=info msg="Forcibly stopping sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\"" Jan 13 20:47:20.168271 containerd[1502]: time="2025-01-13T20:47:20.168222249Z" level=info msg="TearDown network for sandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" successfully" Jan 13 20:47:20.172140 containerd[1502]: time="2025-01-13T20:47:20.172105093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.172196 containerd[1502]: time="2025-01-13T20:47:20.172145518Z" level=info msg="RemovePodSandbox \"cd31b1c45afe325845bf1a41b83b6931c5191576d807347539dbe288a2fd7d8d\" returns successfully" Jan 13 20:47:20.172421 containerd[1502]: time="2025-01-13T20:47:20.172395973Z" level=info msg="StopPodSandbox for \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\"" Jan 13 20:47:20.172502 containerd[1502]: time="2025-01-13T20:47:20.172478906Z" level=info msg="TearDown network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\" successfully" Jan 13 20:47:20.172502 containerd[1502]: time="2025-01-13T20:47:20.172492883Z" level=info msg="StopPodSandbox for \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\" returns successfully" Jan 13 20:47:20.172912 containerd[1502]: time="2025-01-13T20:47:20.172854964Z" level=info msg="RemovePodSandbox for \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\"" Jan 13 20:47:20.172912 containerd[1502]: time="2025-01-13T20:47:20.172901551Z" level=info msg="Forcibly stopping sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\"" Jan 13 20:47:20.173016 containerd[1502]: time="2025-01-13T20:47:20.172979455Z" level=info msg="TearDown network for sandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\" successfully" Jan 13 20:47:20.176735 containerd[1502]: time="2025-01-13T20:47:20.176698485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.176799 containerd[1502]: time="2025-01-13T20:47:20.176766802Z" level=info msg="RemovePodSandbox \"b62e896d583a9614f1a17b91129b901a844c46e4e097515507c996b35ca4073f\" returns successfully" Jan 13 20:47:20.177129 containerd[1502]: time="2025-01-13T20:47:20.177066288Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:47:20.177193 containerd[1502]: time="2025-01-13T20:47:20.177153680Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully" Jan 13 20:47:20.177193 containerd[1502]: time="2025-01-13T20:47:20.177187774Z" level=info msg="StopPodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully" Jan 13 20:47:20.177512 containerd[1502]: time="2025-01-13T20:47:20.177491237Z" level=info msg="RemovePodSandbox for \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:47:20.177512 containerd[1502]: time="2025-01-13T20:47:20.177510512Z" level=info msg="Forcibly stopping sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\"" Jan 13 20:47:20.177600 containerd[1502]: time="2025-01-13T20:47:20.177572868Z" level=info msg="TearDown network for sandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" successfully" Jan 13 20:47:20.181120 containerd[1502]: time="2025-01-13T20:47:20.181036174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.181120 containerd[1502]: time="2025-01-13T20:47:20.181072390Z" level=info msg="RemovePodSandbox \"f2b31b0495a3f6db09cd4dc9c808664bb83d982a31c57143c3c572892962180c\" returns successfully" Jan 13 20:47:20.181328 containerd[1502]: time="2025-01-13T20:47:20.181307287Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:47:20.181405 containerd[1502]: time="2025-01-13T20:47:20.181382025Z" level=info msg="TearDown network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully" Jan 13 20:47:20.181405 containerd[1502]: time="2025-01-13T20:47:20.181395961Z" level=info msg="StopPodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully" Jan 13 20:47:20.181731 containerd[1502]: time="2025-01-13T20:47:20.181693683Z" level=info msg="RemovePodSandbox for \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:47:20.181936 containerd[1502]: time="2025-01-13T20:47:20.181910266Z" level=info msg="Forcibly stopping sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\"" Jan 13 20:47:20.182050 containerd[1502]: time="2025-01-13T20:47:20.182012836Z" level=info msg="TearDown network for sandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" successfully" Jan 13 20:47:20.186027 containerd[1502]: time="2025-01-13T20:47:20.185997890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.186169 containerd[1502]: time="2025-01-13T20:47:20.186043915Z" level=info msg="RemovePodSandbox \"d62f6d8fb0c7b8334c1947d3c0dcece4c4bb9cc560e0bef2ff6051d4ee35645e\" returns successfully" Jan 13 20:47:20.186527 containerd[1502]: time="2025-01-13T20:47:20.186458465Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 20:47:20.186596 containerd[1502]: time="2025-01-13T20:47:20.186568519Z" level=info msg="TearDown network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" successfully" Jan 13 20:47:20.186596 containerd[1502]: time="2025-01-13T20:47:20.186593605Z" level=info msg="StopPodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" returns successfully" Jan 13 20:47:20.186995 containerd[1502]: time="2025-01-13T20:47:20.186912106Z" level=info msg="RemovePodSandbox for \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 20:47:20.186995 containerd[1502]: time="2025-01-13T20:47:20.186937863Z" level=info msg="Forcibly stopping sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\"" Jan 13 20:47:20.187095 containerd[1502]: time="2025-01-13T20:47:20.187018874Z" level=info msg="TearDown network for sandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" successfully" Jan 13 20:47:20.190578 containerd[1502]: time="2025-01-13T20:47:20.190488302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.190578 containerd[1502]: time="2025-01-13T20:47:20.190549454Z" level=info msg="RemovePodSandbox \"83e9feedde1119a89d99e05059c760c63e2ff973d1751e25c1ecce159ec454b7\" returns successfully" Jan 13 20:47:20.190839 containerd[1502]: time="2025-01-13T20:47:20.190810910Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" Jan 13 20:47:20.190980 containerd[1502]: time="2025-01-13T20:47:20.190918700Z" level=info msg="TearDown network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" successfully" Jan 13 20:47:20.190980 containerd[1502]: time="2025-01-13T20:47:20.190971458Z" level=info msg="StopPodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" returns successfully" Jan 13 20:47:20.191329 containerd[1502]: time="2025-01-13T20:47:20.191304486Z" level=info msg="RemovePodSandbox for \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" Jan 13 20:47:20.191389 containerd[1502]: time="2025-01-13T20:47:20.191329712Z" level=info msg="Forcibly stopping sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\"" Jan 13 20:47:20.191423 containerd[1502]: time="2025-01-13T20:47:20.191398180Z" level=info msg="TearDown network for sandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" successfully" Jan 13 20:47:20.195092 containerd[1502]: time="2025-01-13T20:47:20.195070013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.195138 containerd[1502]: time="2025-01-13T20:47:20.195115046Z" level=info msg="RemovePodSandbox \"073ff71f244115f6d84c228c7d95d8a204a97bf32223e5525bda311a6c1edabe\" returns successfully" Jan 13 20:47:20.195508 containerd[1502]: time="2025-01-13T20:47:20.195374477Z" level=info msg="StopPodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\"" Jan 13 20:47:20.195508 containerd[1502]: time="2025-01-13T20:47:20.195450909Z" level=info msg="TearDown network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" successfully" Jan 13 20:47:20.195508 containerd[1502]: time="2025-01-13T20:47:20.195459865Z" level=info msg="StopPodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" returns successfully" Jan 13 20:47:20.195724 containerd[1502]: time="2025-01-13T20:47:20.195683100Z" level=info msg="RemovePodSandbox for \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\"" Jan 13 20:47:20.195724 containerd[1502]: time="2025-01-13T20:47:20.195703307Z" level=info msg="Forcibly stopping sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\"" Jan 13 20:47:20.195911 containerd[1502]: time="2025-01-13T20:47:20.195778657Z" level=info msg="TearDown network for sandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" successfully" Jan 13 20:47:20.199806 containerd[1502]: time="2025-01-13T20:47:20.199774751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.199806 containerd[1502]: time="2025-01-13T20:47:20.199806410Z" level=info msg="RemovePodSandbox \"81a5592e212ade4b470bdb56034edf887157f62ad526c5bf973b092b0eb5e97d\" returns successfully" Jan 13 20:47:20.200128 containerd[1502]: time="2025-01-13T20:47:20.200098482Z" level=info msg="StopPodSandbox for \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\"" Jan 13 20:47:20.200205 containerd[1502]: time="2025-01-13T20:47:20.200186695Z" level=info msg="TearDown network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\" successfully" Jan 13 20:47:20.200205 containerd[1502]: time="2025-01-13T20:47:20.200203186Z" level=info msg="StopPodSandbox for \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\" returns successfully" Jan 13 20:47:20.200421 containerd[1502]: time="2025-01-13T20:47:20.200400172Z" level=info msg="RemovePodSandbox for \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\"" Jan 13 20:47:20.200421 containerd[1502]: time="2025-01-13T20:47:20.200419608Z" level=info msg="Forcibly stopping sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\"" Jan 13 20:47:20.200509 containerd[1502]: time="2025-01-13T20:47:20.200475441Z" level=info msg="TearDown network for sandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\" successfully" Jan 13 20:47:20.205763 containerd[1502]: time="2025-01-13T20:47:20.205731213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:47:20.205830 containerd[1502]: time="2025-01-13T20:47:20.205790593Z" level=info msg="RemovePodSandbox \"8cf8c7020eb89d8a4b63ce6d10ab2da7a80706feb89a817c6cf49f7cba461b50\" returns successfully" Jan 13 20:47:23.599076 kubelet[2604]: E0113 20:47:23.599043 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:47:25.150005 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:52160.service - OpenSSH per-connection server daemon (10.0.0.1:52160). Jan 13 20:47:25.190844 sshd[6312]: Accepted publickey for core from 10.0.0.1 port 52160 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:47:25.192344 sshd-session[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:25.196099 systemd-logind[1488]: New session 22 of user core. Jan 13 20:47:25.206022 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:47:25.313849 sshd[6314]: Connection closed by 10.0.0.1 port 52160 Jan 13 20:47:25.314195 sshd-session[6312]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:25.317804 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:52160.service: Deactivated successfully. Jan 13 20:47:25.319770 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:47:25.320440 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:47:25.321364 systemd-logind[1488]: Removed session 22. Jan 13 20:47:30.331726 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:52170.service - OpenSSH per-connection server daemon (10.0.0.1:52170). 
Jan 13 20:47:30.373036 sshd[6328]: Accepted publickey for core from 10.0.0.1 port 52170 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:47:30.375066 sshd-session[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:30.379133 systemd-logind[1488]: New session 23 of user core. Jan 13 20:47:30.388143 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:47:30.508349 sshd[6330]: Connection closed by 10.0.0.1 port 52170 Jan 13 20:47:30.508735 sshd-session[6328]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:30.512963 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:52170.service: Deactivated successfully. Jan 13 20:47:30.515078 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:47:30.515962 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:47:30.517070 systemd-logind[1488]: Removed session 23. Jan 13 20:47:34.961353 kubelet[2604]: E0113 20:47:34.961300 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:47:35.519919 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:53468.service - OpenSSH per-connection server daemon (10.0.0.1:53468). Jan 13 20:47:35.559201 sshd[6350]: Accepted publickey for core from 10.0.0.1 port 53468 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:47:35.560817 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:35.564511 systemd-logind[1488]: New session 24 of user core. Jan 13 20:47:35.575013 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 13 20:47:35.678083 sshd[6352]: Connection closed by 10.0.0.1 port 53468 Jan 13 20:47:35.678458 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:35.682539 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:53468.service: Deactivated successfully. Jan 13 20:47:35.684949 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:47:35.685615 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:47:35.686743 systemd-logind[1488]: Removed session 24.