Feb 13 19:52:06.912668 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:52:06.912689 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:52:06.912701 kernel: BIOS-provided physical RAM map:
Feb 13 19:52:06.912707 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:52:06.912713 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:52:06.912719 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:52:06.912726 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:52:06.912732 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:52:06.912739 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:52:06.912745 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:52:06.912753 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 19:52:06.912760 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:52:06.912766 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:52:06.912772 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:52:06.912780 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:52:06.912786 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:52:06.912795 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:52:06.912802 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:52:06.912808 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:52:06.912815 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:52:06.912822 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:52:06.912828 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:52:06.912835 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:52:06.912841 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:52:06.912848 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:52:06.912855 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:52:06.912861 kernel: NX (Execute Disable) protection: active
Feb 13 19:52:06.912870 kernel: APIC: Static calls initialized
Feb 13 19:52:06.912877 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:52:06.912884 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:52:06.912890 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:52:06.912897 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:52:06.912903 kernel: extended physical RAM map:
Feb 13 19:52:06.912910 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:52:06.912917 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:52:06.912923 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:52:06.912930 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:52:06.912937 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:52:06.912945 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:52:06.912952 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:52:06.912962 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 19:52:06.912969 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 19:52:06.912976 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 19:52:06.912983 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 19:52:06.912990 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 19:52:06.913000 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:52:06.913007 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:52:06.913014 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:52:06.913021 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:52:06.913028 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:52:06.913035 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:52:06.913042 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:52:06.913049 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:52:06.913056 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:52:06.913065 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:52:06.913072 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:52:06.913079 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:52:06.913095 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:52:06.913103 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:52:06.913110 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:52:06.913117 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:52:06.913124 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 19:52:06.913131 kernel: random: crng init done
Feb 13 19:52:06.913138 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 19:52:06.913145 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 19:52:06.913154 kernel: secureboot: Secure boot disabled
Feb 13 19:52:06.913202 kernel: SMBIOS 2.8 present.
Feb 13 19:52:06.913209 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 19:52:06.913216 kernel: Hypervisor detected: KVM
Feb 13 19:52:06.913223 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:52:06.913230 kernel: kvm-clock: using sched offset of 2976399044 cycles
Feb 13 19:52:06.913238 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:52:06.913245 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:52:06.913253 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:52:06.913260 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:52:06.913267 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 19:52:06.913277 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:52:06.913285 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:52:06.913292 kernel: Using GB pages for direct mapping
Feb 13 19:52:06.913299 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:52:06.913306 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:52:06.913313 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:52:06.913321 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:52:06.913328 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:52:06.913335 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:52:06.913344 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:52:06.913352 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:52:06.913359 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:52:06.913366 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:52:06.913373 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:52:06.913380 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:52:06.913387 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:52:06.913394 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:52:06.913403 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:52:06.913411 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:52:06.913418 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:52:06.913425 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:52:06.913432 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:52:06.913439 kernel: No NUMA configuration found
Feb 13 19:52:06.913446 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 19:52:06.913453 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 19:52:06.913460 kernel: Zone ranges:
Feb 13 19:52:06.913467 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:52:06.913477 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 19:52:06.913484 kernel: Normal empty
Feb 13 19:52:06.913491 kernel: Movable zone start for each node
Feb 13 19:52:06.913498 kernel: Early memory node ranges
Feb 13 19:52:06.913505 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:52:06.913512 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:52:06.913519 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:52:06.913526 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 19:52:06.913533 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 19:52:06.913544 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 19:52:06.913553 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 19:52:06.913561 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 19:52:06.913570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 19:52:06.913579 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:52:06.913588 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:52:06.913607 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:52:06.913618 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:52:06.913628 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 19:52:06.913637 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 19:52:06.913647 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:52:06.913656 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 19:52:06.913668 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 19:52:06.913677 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:52:06.913687 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:52:06.913696 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:52:06.913706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:52:06.913718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:52:06.913727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:52:06.913737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:52:06.913746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:52:06.913755 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:52:06.913764 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:52:06.913771 kernel: TSC deadline timer available
Feb 13 19:52:06.913779 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:52:06.913786 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:52:06.913796 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:52:06.913803 kernel: kvm-guest: setup PV sched yield
Feb 13 19:52:06.913811 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 19:52:06.913818 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:52:06.913826 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:52:06.913834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:52:06.913841 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:52:06.913849 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:52:06.913856 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:52:06.913865 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:52:06.913873 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:52:06.913882 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:52:06.913890 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:52:06.913897 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:52:06.913905 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:52:06.913912 kernel: Fallback order for Node 0: 0
Feb 13 19:52:06.913920 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 19:52:06.913929 kernel: Policy zone: DMA32
Feb 13 19:52:06.913937 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:52:06.913945 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 19:52:06.913952 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:52:06.913960 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:52:06.913967 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:52:06.913975 kernel: Dynamic Preempt: voluntary
Feb 13 19:52:06.913983 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:52:06.913991 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:52:06.914001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:52:06.914008 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:52:06.914016 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:52:06.914023 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:52:06.914031 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:52:06.914039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:52:06.914046 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:52:06.914054 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:52:06.914061 kernel: Console: colour dummy device 80x25
Feb 13 19:52:06.914069 kernel: printk: console [ttyS0] enabled
Feb 13 19:52:06.914078 kernel: ACPI: Core revision 20230628
Feb 13 19:52:06.914092 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:52:06.914101 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:52:06.914108 kernel: x2apic enabled
Feb 13 19:52:06.914116 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:52:06.914123 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:52:06.914131 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:52:06.914139 kernel: kvm-guest: setup PV IPIs
Feb 13 19:52:06.914146 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:52:06.914156 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:52:06.914174 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:52:06.914182 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:52:06.914190 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:52:06.914197 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:52:06.914205 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:52:06.914212 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:52:06.914220 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:52:06.914227 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:52:06.914238 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:52:06.914245 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:52:06.914253 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:52:06.914260 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:52:06.914268 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:52:06.914276 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:52:06.914284 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:52:06.914291 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:52:06.914301 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:52:06.914309 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:52:06.914316 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:52:06.914324 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:52:06.914331 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:52:06.914339 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:52:06.914346 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:52:06.914354 kernel: landlock: Up and running.
Feb 13 19:52:06.914361 kernel: SELinux: Initializing.
Feb 13 19:52:06.914371 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:52:06.914378 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:52:06.914386 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:52:06.914394 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:52:06.914401 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:52:06.914409 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:52:06.914416 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:52:06.914424 kernel: ... version: 0
Feb 13 19:52:06.914433 kernel: ... bit width: 48
Feb 13 19:52:06.914441 kernel: ... generic registers: 6
Feb 13 19:52:06.914448 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:52:06.914456 kernel: ... max period: 00007fffffffffff
Feb 13 19:52:06.914464 kernel: ... fixed-purpose events: 0
Feb 13 19:52:06.914471 kernel: ... event mask: 000000000000003f
Feb 13 19:52:06.914479 kernel: signal: max sigframe size: 1776
Feb 13 19:52:06.914486 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:52:06.914494 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:52:06.914501 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:52:06.914511 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:52:06.914518 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:52:06.914526 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:52:06.914533 kernel: smpboot: Max logical packages: 1
Feb 13 19:52:06.914541 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:52:06.914548 kernel: devtmpfs: initialized
Feb 13 19:52:06.914556 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:52:06.914563 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:52:06.914571 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:52:06.914581 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 19:52:06.914589 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:52:06.914596 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 19:52:06.914604 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:52:06.914611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:52:06.914619 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:52:06.914627 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:52:06.914635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:52:06.914654 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:52:06.914664 kernel: audit: type=2000 audit(1739476327.081:1): state=initialized audit_enabled=0 res=1
Feb 13 19:52:06.914671 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:52:06.914679 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:52:06.914686 kernel: cpuidle: using governor menu
Feb 13 19:52:06.914694 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:52:06.914701 kernel: dca service started, version 1.12.1
Feb 13 19:52:06.914709 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:52:06.914716 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:52:06.914724 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:52:06.914733 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:52:06.914741 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:52:06.914748 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:52:06.914756 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:52:06.914764 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:52:06.914771 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:52:06.914778 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:52:06.914786 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:52:06.914794 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:52:06.914803 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:52:06.914811 kernel: ACPI: Interpreter enabled
Feb 13 19:52:06.914818 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:52:06.914825 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:52:06.914833 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:52:06.914841 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:52:06.914848 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:52:06.914856 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:52:06.915032 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:52:06.915197 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:52:06.915320 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:52:06.915330 kernel: PCI host bridge to bus 0000:00
Feb 13 19:52:06.915455 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:52:06.915583 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:52:06.915697 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:52:06.915812 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 19:52:06.915924 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 19:52:06.916035 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:52:06.916154 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:52:06.916325 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:52:06.916461 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:52:06.916586 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:52:06.916706 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:52:06.916826 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:52:06.916945 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:52:06.917065 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:52:06.917223 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:52:06.917346 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:52:06.917473 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:52:06.917603 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 19:52:06.917736 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:52:06.917857 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:52:06.917976 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:52:06.918104 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 19:52:06.918263 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:52:06.918392 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:52:06.918512 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:52:06.918635 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 19:52:06.918861 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:52:06.919014 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:52:06.919146 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:52:06.919316 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:52:06.919445 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:52:06.919565 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:52:06.919693 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:52:06.919812 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:52:06.919823 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:52:06.919831 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:52:06.919838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:52:06.919850 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:52:06.919857 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:52:06.919865 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:52:06.919872 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:52:06.919880 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:52:06.919888 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:52:06.919895 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:52:06.919903 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:52:06.919910 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:52:06.919920 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:52:06.919928 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:52:06.919935 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:52:06.919948 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:52:06.919955 kernel: iommu: Default domain type: Translated
Feb 13 19:52:06.919963 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:52:06.919971 kernel: efivars: Registered efivars operations
Feb 13 19:52:06.919979 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:52:06.919987 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:52:06.919997 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:52:06.920005 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 19:52:06.920012 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 19:52:06.920020 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 19:52:06.920028 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 19:52:06.920036 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 19:52:06.920043 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 19:52:06.920051 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 19:52:06.920200 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:52:06.920342 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:52:06.920462 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:52:06.920472 kernel: vgaarb: loaded
Feb 13 19:52:06.920480 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:52:06.920488 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:52:06.920496 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:52:06.920503 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:52:06.920511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:52:06.920523 kernel: pnp: PnP ACPI init
Feb 13 19:52:06.920656 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 19:52:06.920668 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:52:06.920676 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:52:06.920683 kernel: NET: Registered PF_INET protocol family
Feb 13 19:52:06.920709 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:52:06.920719 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:52:06.920727 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:52:06.920738 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:52:06.920746 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:52:06.920753 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:52:06.920762 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:52:06.920769 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:52:06.920777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:52:06.920785 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:52:06.920909 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:52:06.921040 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:52:06.921260 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:52:06.921408 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:52:06.921536 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:52:06.921660 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 19:52:06.921778 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 19:52:06.921889 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:52:06.921899 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:52:06.921912 kernel: Initialise system trusted keyrings
Feb 13 19:52:06.921920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:52:06.921929 kernel: Key type asymmetric registered
Feb 13 19:52:06.921939 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:52:06.921947 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:52:06.921954 kernel: io scheduler mq-deadline registered
Feb 13 19:52:06.921962 kernel: io scheduler kyber registered
Feb 13 19:52:06.921970 kernel: io scheduler bfq registered
Feb 13 19:52:06.921978 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:52:06.921987 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:52:06.921997 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:52:06.922007 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:52:06.922015 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:52:06.922023 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:52:06.922031 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:52:06.922042 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:52:06.922050 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:52:06.922219 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:52:06.922337 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:52:06.922348 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 19:52:06.922460 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:52:06 UTC (1739476326)
Feb 13 19:52:06.922575 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:52:06.922585 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:52:06.922597 kernel: efifb: probing for efifb
Feb 13 19:52:06.922605 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 19:52:06.922613 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 19:52:06.922621 kernel: efifb: scrolling: redraw
Feb 13 19:52:06.922629 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:52:06.922638 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:52:06.922646 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:52:06.922654 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:52:06.922662 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:52:06.922672 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:52:06.922680 kernel: Segment Routing with IPv6
Feb 13 19:52:06.922688 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:52:06.922696 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:52:06.922704 kernel: Key type dns_resolver registered
Feb 13 19:52:06.922711 kernel: IPI shorthand broadcast: enabled
Feb 13 19:52:06.922719 kernel: sched_clock: Marking stable (646002900, 168619682)->(842345127, -27722545)
Feb 13 19:52:06.922727 kernel: registered taskstats version 1
Feb 13 19:52:06.922735 kernel: Loading compiled-in X.509 certificates
Feb 13 19:52:06.922746 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:52:06.922753 kernel: Key type .fscrypt registered
Feb 13 19:52:06.922761 kernel: Key type fscrypt-provisioning registered
Feb 13 19:52:06.922769 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:52:06.922777 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:52:06.922794 kernel: ima: No architecture policies found
Feb 13 19:52:06.922803 kernel: clk: Disabling unused clocks
Feb 13 19:52:06.922811 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:52:06.922819 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:52:06.922830 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:52:06.922838 kernel: Run /init as init process
Feb 13 19:52:06.922846 kernel: with arguments:
Feb 13 19:52:06.922854 kernel: /init
Feb 13 19:52:06.922862 kernel: with environment:
Feb 13 19:52:06.922870 kernel: HOME=/
Feb 13 19:52:06.922878 kernel: TERM=linux
Feb 13 19:52:06.922885 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:52:06.922896 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:52:06.922908 systemd[1]: Detected virtualization kvm.
Feb 13 19:52:06.922917 systemd[1]: Detected architecture x86-64.
Feb 13 19:52:06.922925 systemd[1]: Running in initrd.
Feb 13 19:52:06.922933 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:52:06.922942 systemd[1]: Hostname set to <localhost>.
Feb 13 19:52:06.922950 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:52:06.922959 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:52:06.922970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:52:06.922978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:52:06.922987 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:52:06.922996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:52:06.923005 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:52:06.923013 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:52:06.923024 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:52:06.923034 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:52:06.923043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:52:06.923051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:52:06.923060 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:52:06.923068 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:52:06.923077 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:52:06.923094 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:52:06.923103 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:52:06.923114 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:52:06.923122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:52:06.923131 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:52:06.923139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:52:06.923148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:52:06.923156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:52:06.923222 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:52:06.923230 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:52:06.923239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:52:06.923250 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:52:06.923259 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:52:06.923267 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:52:06.923276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:52:06.923284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:52:06.923292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:52:06.923301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:52:06.923309 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:52:06.923339 systemd-journald[192]: Collecting audit messages is disabled.
Feb 13 19:52:06.923361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:52:06.923370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:06.923379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:52:06.923388 systemd-journald[192]: Journal started
Feb 13 19:52:06.923406 systemd-journald[192]: Runtime Journal (/run/log/journal/c878477f4cb04861b35cb2a8d8af57d1) is 6.0M, max 48.3M, 42.2M free.
Feb 13 19:52:06.921607 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 19:52:06.927714 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:52:06.930911 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:52:06.933842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:52:06.934641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:52:06.951540 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:52:06.951437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:52:06.955806 kernel: Bridge firewalling registered
Feb 13 19:52:06.953364 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 19:52:06.959335 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:52:06.959668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:52:06.961420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:52:06.963539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:52:06.968831 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:52:06.980357 dracut-cmdline[220]: dracut-dracut-053
Feb 13 19:52:06.981303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:52:06.983422 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:52:06.991406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:52:07.021843 systemd-resolved[240]: Positive Trust Anchors:
Feb 13 19:52:07.021863 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:52:07.021893 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:52:07.024561 systemd-resolved[240]: Defaulting to hostname 'linux'.
Feb 13 19:52:07.025643 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:52:07.032205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:52:07.074196 kernel: SCSI subsystem initialized
Feb 13 19:52:07.084186 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:52:07.095191 kernel: iscsi: registered transport (tcp)
Feb 13 19:52:07.122202 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:52:07.122272 kernel: QLogic iSCSI HBA Driver
Feb 13 19:52:07.176296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:52:07.186406 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:52:07.212745 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:52:07.212819 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:52:07.214009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:52:07.260211 kernel: raid6: avx2x4 gen() 28617 MB/s
Feb 13 19:52:07.277217 kernel: raid6: avx2x2 gen() 29386 MB/s
Feb 13 19:52:07.294357 kernel: raid6: avx2x1 gen() 25445 MB/s
Feb 13 19:52:07.294461 kernel: raid6: using algorithm avx2x2 gen() 29386 MB/s
Feb 13 19:52:07.312351 kernel: raid6: .... xor() 18752 MB/s, rmw enabled
Feb 13 19:52:07.312443 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:52:07.333202 kernel: xor: automatically using best checksumming function avx
Feb 13 19:52:07.526200 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:52:07.539399 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:52:07.551413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:52:07.567817 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Feb 13 19:52:07.573042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:52:07.581384 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:52:07.595660 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Feb 13 19:52:07.629936 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:52:07.645409 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:52:07.723027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:52:07.740391 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:52:07.754229 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:52:07.757700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:52:07.760333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:52:07.766231 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:52:07.763891 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:52:07.772296 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:52:07.804818 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:52:07.804849 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:52:07.804863 kernel: libata version 3.00 loaded.
Feb 13 19:52:07.804878 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:52:07.805087 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:52:07.805103 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:52:07.825242 kernel: GPT:9289727 != 19775487
Feb 13 19:52:07.825280 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:52:07.825299 kernel: GPT:9289727 != 19775487
Feb 13 19:52:07.825315 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:52:07.825330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:52:07.825345 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:52:07.825359 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:52:07.825580 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:52:07.825765 kernel: scsi host0: ahci
Feb 13 19:52:07.825971 kernel: scsi host1: ahci
Feb 13 19:52:07.826379 kernel: scsi host2: ahci
Feb 13 19:52:07.826552 kernel: scsi host3: ahci
Feb 13 19:52:07.826705 kernel: scsi host4: ahci
Feb 13 19:52:07.826850 kernel: scsi host5: ahci
Feb 13 19:52:07.826995 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Feb 13 19:52:07.827007 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Feb 13 19:52:07.827035 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Feb 13 19:52:07.827046 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Feb 13 19:52:07.827065 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Feb 13 19:52:07.827075 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Feb 13 19:52:07.772575 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:52:07.787108 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:52:07.799907 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:52:07.841141 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468)
Feb 13 19:52:07.800202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:52:07.844901 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (478)
Feb 13 19:52:07.805552 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:52:07.807196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:52:07.807497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:07.810907 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:52:07.818831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:52:07.846773 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:52:07.868339 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:52:07.868843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:07.885881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:52:07.892715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:52:07.892897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:52:07.908488 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:52:07.909744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:52:07.925614 disk-uuid[560]: Primary Header is updated.
Feb 13 19:52:07.925614 disk-uuid[560]: Secondary Entries is updated.
Feb 13 19:52:07.925614 disk-uuid[560]: Secondary Header is updated.
Feb 13 19:52:07.931211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:52:07.938593 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:52:08.138763 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:52:08.138867 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:52:08.138886 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:52:08.140199 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:52:08.141200 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:52:08.148293 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:52:08.148382 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:52:08.148419 kernel: ata3.00: applying bridge limits
Feb 13 19:52:08.149521 kernel: ata3.00: configured for UDMA/100
Feb 13 19:52:08.150206 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:52:08.196220 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:52:08.210229 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:52:08.210255 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:52:08.938202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:52:08.938974 disk-uuid[565]: The operation has completed successfully.
Feb 13 19:52:08.969893 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:52:08.970076 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:52:08.997400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:52:09.003069 sh[596]: Success
Feb 13 19:52:09.017192 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:52:09.054544 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:52:09.079286 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:52:09.082835 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:52:09.095728 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:52:09.095805 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:52:09.095823 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:52:09.096827 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:52:09.097667 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:52:09.103968 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:52:09.106615 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:52:09.130541 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:52:09.132963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:52:09.147921 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:52:09.147982 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:52:09.147997 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:52:09.151189 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:52:09.161570 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:52:09.163576 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:52:09.175750 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:52:09.183444 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:52:09.244988 ignition[698]: Ignition 2.20.0
Feb 13 19:52:09.245001 ignition[698]: Stage: fetch-offline
Feb 13 19:52:09.245050 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:52:09.245060 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:52:09.245148 ignition[698]: parsed url from cmdline: ""
Feb 13 19:52:09.245152 ignition[698]: no config URL provided
Feb 13 19:52:09.245157 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:52:09.245180 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:52:09.245209 ignition[698]: op(1): [started] loading QEMU firmware config module
Feb 13 19:52:09.245216 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:52:09.253788 ignition[698]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:52:09.260450 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:52:09.272437 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:52:09.297156 systemd-networkd[784]: lo: Link UP
Feb 13 19:52:09.297185 systemd-networkd[784]: lo: Gained carrier
Feb 13 19:52:09.300146 ignition[698]: parsing config with SHA512: 7c7b7f1aba93dda1d5f0f51b8c88a4e0016ba85c2ec7abe2a854424019075f8cba7383c411bd388e19fadc058d54307e302775c1460456e5a70c7490111dcf02
Feb 13 19:52:09.301093 systemd-networkd[784]: Enumeration completed
Feb 13 19:52:09.301270 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:52:09.302189 systemd[1]: Reached target network.target - Network.
Feb 13 19:52:09.305717 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:09.305724 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:52:09.306855 ignition[698]: fetch-offline: fetch-offline passed
Feb 13 19:52:09.306465 unknown[698]: fetched base config from "system"
Feb 13 19:52:09.306934 ignition[698]: Ignition finished successfully
Feb 13 19:52:09.306472 unknown[698]: fetched user config from "qemu"
Feb 13 19:52:09.307206 systemd-networkd[784]: eth0: Link UP
Feb 13 19:52:09.307209 systemd-networkd[784]: eth0: Gained carrier
Feb 13 19:52:09.307218 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:09.313805 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:52:09.322624 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:52:09.334244 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:52:09.334316 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:52:09.348114 ignition[787]: Ignition 2.20.0
Feb 13 19:52:09.348125 ignition[787]: Stage: kargs
Feb 13 19:52:09.348313 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:52:09.348324 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:52:09.349147 ignition[787]: kargs: kargs passed
Feb 13 19:52:09.349207 ignition[787]: Ignition finished successfully
Feb 13 19:52:09.357342 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:52:09.383463 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:52:09.395089 ignition[796]: Ignition 2.20.0
Feb 13 19:52:09.395102 ignition[796]: Stage: disks
Feb 13 19:52:09.395288 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:52:09.395300 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:52:09.396131 ignition[796]: disks: disks passed
Feb 13 19:52:09.396195 ignition[796]: Ignition finished successfully
Feb 13 19:52:09.401880 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:52:09.403217 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:52:09.405288 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:52:09.405535 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:52:09.405896 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:52:09.406455 systemd[1]: Reached target basic.target - Basic System.
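The fetch-offline stage above loads qemu_fw_cfg and then logs the SHA512 of the user config it received from the hypervisor. A small sketch that reproduces that digest from inside the guest, assuming the config was injected under the conventional fw_cfg key opt/com.coreos/config (the key name is not recorded in the journal):

    # Sketch only: read the Ignition config blob from QEMU's fw_cfg sysfs
    # interface and print its SHA512, which should match the digest logged
    # by ignition[698] above. The fw_cfg key name is an assumption.
    import hashlib
    import pathlib
    import subprocess

    subprocess.run(["modprobe", "qemu_fw_cfg"], check=True)  # same as Ignition's op(1)
    blob = pathlib.Path(
        "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"
    ).read_bytes()
    print("SHA512:", hashlib.sha512(blob).hexdigest())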
Feb 13 19:52:09.428465 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:52:09.442665 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:52:09.450323 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:52:09.458338 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:52:09.567196 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:52:09.568190 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:52:09.568919 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:52:09.583341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:52:09.585796 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:52:09.586214 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:52:09.593799 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
Feb 13 19:52:09.593837 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:52:09.586261 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:52:09.600784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:52:09.600826 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:52:09.600843 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:52:09.586287 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:52:09.603428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:52:09.624576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:52:09.626971 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:52:09.673585 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:52:09.679991 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:52:09.685684 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:52:09.691317 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:52:09.794114 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:52:09.806282 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:52:09.808109 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:52:09.814188 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:52:09.834414 ignition[927]: INFO : Ignition 2.20.0
Feb 13 19:52:09.834414 ignition[927]: INFO : Stage: mount
Feb 13 19:52:09.837466 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:52:09.837466 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:52:09.837466 ignition[927]: INFO : mount: mount passed
Feb 13 19:52:09.837466 ignition[927]: INFO : Ignition finished successfully
Feb 13 19:52:09.834869 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:52:09.837444 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:52:09.845648 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:52:10.094860 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:52:10.108563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:52:10.116204 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941)
Feb 13 19:52:10.116285 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:52:10.118355 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:52:10.118390 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:52:10.122182 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:52:10.123890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:52:10.144676 ignition[958]: INFO : Ignition 2.20.0
Feb 13 19:52:10.144676 ignition[958]: INFO : Stage: files
Feb 13 19:52:10.147025 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:52:10.147025 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:52:10.147025 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:52:10.147025 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:52:10.147025 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:52:10.155041 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:52:10.150278 unknown[958]: wrote ssh authorized keys file for user: core
Feb 13 19:52:10.201577 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:52:10.343467 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:52:10.343467 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
"/sysroot/home/core/nfs-pod.yaml" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:52:10.348372 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:52:10.363799 systemd-networkd[784]: eth0: Gained IPv6LL Feb 13 19:52:10.762232 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:52:11.039133 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:52:11.039133 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(10): op(11): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Feb 13 19:52:11.043802 ignition[958]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:52:11.070533 ignition[958]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:52:11.084309 ignition[958]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:52:11.086309 ignition[958]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:52:11.086309 ignition[958]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:52:11.089654 ignition[958]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:52:11.091376 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:52:11.093474 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:52:11.095483 ignition[958]: INFO : files: files passed Feb 13 19:52:11.096382 ignition[958]: INFO : Ignition finished successfully Feb 13 19:52:11.098438 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:52:11.112296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:52:11.114281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:52:11.116380 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:52:11.116487 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:52:11.123323 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:52:11.125545 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:52:11.145821 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:52:11.128151 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:52:11.149753 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:52:11.146275 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:52:11.156276 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:52:11.179775 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:52:11.179888 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:52:11.182313 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:52:11.184476 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:52:11.184737 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:52:11.189191 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:52:11.222389 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Feb 13 19:52:11.224010 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:52:11.238214 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:52:11.239551 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:52:11.241888 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:52:11.244005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:52:11.244176 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:52:11.246546 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:52:11.248397 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:52:11.250554 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:52:11.252718 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:52:11.255013 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:52:11.257258 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:52:11.259457 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:52:11.262077 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:52:11.264195 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:52:11.266674 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:52:11.268813 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:52:11.268986 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:52:11.271402 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:52:11.272840 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:52:11.275020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:52:11.275195 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:52:11.277386 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:52:11.277504 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:52:11.279712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:52:11.279820 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:52:11.281839 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:52:11.283624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:52:11.287219 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:52:11.287364 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:52:11.287562 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:52:11.287770 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:52:11.287866 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:52:11.288214 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:52:11.288300 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:52:11.288830 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:52:11.288981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:52:11.289606 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:52:11.289740 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:52:11.303395 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:52:11.318449 ignition[1013]: INFO : Ignition 2.20.0
Feb 13 19:52:11.318449 ignition[1013]: INFO : Stage: umount
Feb 13 19:52:11.318449 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:52:11.318449 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:52:11.318449 ignition[1013]: INFO : umount: umount passed
Feb 13 19:52:11.318449 ignition[1013]: INFO : Ignition finished successfully
Feb 13 19:52:11.305647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:52:11.306628 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:52:11.306751 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:52:11.308888 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:52:11.309076 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:52:11.315646 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:52:11.315799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:52:11.318795 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:52:11.318913 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:52:11.322410 systemd[1]: Stopped target network.target - Network.
Feb 13 19:52:11.324586 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:52:11.324640 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:52:11.326511 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:52:11.326558 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:52:11.328451 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:52:11.328497 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:52:11.330615 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:52:11.330665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:52:11.333080 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:52:11.335620 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:52:11.338733 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:52:11.340223 systemd-networkd[784]: eth0: DHCPv6 lease lost
Feb 13 19:52:11.342524 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:52:11.342652 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:52:11.344965 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:52:11.345007 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:52:11.355265 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:52:11.357313 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:52:11.357367 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:52:11.359745 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:52:11.362551 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:52:11.362662 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:52:11.366951 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:52:11.367036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:52:11.368344 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:52:11.368390 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:52:11.370516 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:52:11.370564 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:52:11.374080 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:52:11.374203 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:52:11.379841 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:52:11.380022 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:52:11.381951 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:52:11.382007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:52:11.383815 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:52:11.383853 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:52:11.385843 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:52:11.385891 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:52:11.388062 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:52:11.388110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:52:11.390064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:52:11.390111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:52:11.402364 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:52:11.403662 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:52:11.403728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:52:11.405985 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:52:11.406036 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:52:11.408237 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:52:11.408286 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:52:11.410589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:52:11.410634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:11.413023 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:52:11.413132 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:52:11.492047 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:52:11.492238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:52:11.494780 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:52:11.496367 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:52:11.496435 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:52:11.513544 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:52:11.520933 systemd[1]: Switching root.
Feb 13 19:52:11.547520 systemd-journald[192]: Journal stopped
Feb 13 19:52:12.715451 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:52:12.715538 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:52:12.715560 kernel: SELinux: policy capability open_perms=1
Feb 13 19:52:12.715574 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:52:12.715589 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:52:12.715603 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:52:12.715619 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:52:12.715633 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:52:12.715648 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:52:12.715664 kernel: audit: type=1403 audit(1739476331.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:52:12.715680 systemd[1]: Successfully loaded SELinux policy in 43.574ms.
Feb 13 19:52:12.715715 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.866ms.
Feb 13 19:52:12.715733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:52:12.715750 systemd[1]: Detected virtualization kvm.
Feb 13 19:52:12.715766 systemd[1]: Detected architecture x86-64.
Feb 13 19:52:12.715789 systemd[1]: Detected first boot.
Feb 13 19:52:12.715806 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:52:12.715822 zram_generator::config[1073]: No configuration found.
Feb 13 19:52:12.715840 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:52:12.715860 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:52:12.715876 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:52:12.715896 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:52:12.715912 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:52:12.715938 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:52:12.715953 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:52:12.715970 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:52:12.715986 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:52:12.716002 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:52:12.716022 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:52:12.716039 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:52:12.716054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:52:12.716070 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:52:12.716084 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:52:12.716100 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:52:12.716115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:52:12.716130 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:52:12.716147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:52:12.716178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:52:12.716195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:52:12.716222 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:52:12.716239 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:52:12.716253 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:52:12.716269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:52:12.716283 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:52:12.716303 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:52:12.716318 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:52:12.716332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:52:12.716347 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:52:12.716361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:52:12.716375 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:52:12.716390 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:52:12.716404 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:52:12.716429 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:52:12.716449 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:12.716467 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:52:12.716483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:52:12.716498 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:52:12.716514 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:52:12.716530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:12.716545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:52:12.716560 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:52:12.716576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:12.716597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:52:12.716616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:12.716632 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:52:12.716648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:12.716665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:52:12.716682 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:52:12.716699 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:52:12.716715 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:52:12.716736 kernel: loop: module loaded
Feb 13 19:52:12.716752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:52:12.716767 kernel: fuse: init (API version 7.39)
Feb 13 19:52:12.716783 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:52:12.716799 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:52:12.716816 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:52:12.716832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:12.716849 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:52:12.716890 systemd-journald[1156]: Collecting audit messages is disabled.
Feb 13 19:52:12.716937 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:52:12.716955 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:52:12.716976 kernel: ACPI: bus type drm_connector registered
Feb 13 19:52:12.716992 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:52:12.717008 systemd-journald[1156]: Journal started
Feb 13 19:52:12.720463 systemd-journald[1156]: Runtime Journal (/run/log/journal/c878477f4cb04861b35cb2a8d8af57d1) is 6.0M, max 48.3M, 42.2M free.
Feb 13 19:52:12.720539 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:52:12.723437 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:52:12.723493 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:52:12.725808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:52:12.727502 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:52:12.727735 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:52:12.733547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:12.733765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:12.735276 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:52:12.735491 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:52:12.736901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:12.737123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:12.738896 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:52:12.739117 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:52:12.740565 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:12.740790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:12.742615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:52:12.744363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:52:12.749880 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:52:12.764565 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:52:12.773263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:52:12.776273 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:52:12.778255 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:52:12.781344 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:52:12.785996 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:52:12.787239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:52:12.791260 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:52:12.792572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:52:12.798289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:52:12.802851 systemd-journald[1156]: Time spent on flushing to /var/log/journal/c878477f4cb04861b35cb2a8d8af57d1 is 15.745ms for 1026 entries.
Feb 13 19:52:12.802851 systemd-journald[1156]: System Journal (/var/log/journal/c878477f4cb04861b35cb2a8d8af57d1) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:52:12.855397 systemd-journald[1156]: Received client request to flush runtime journal.
Feb 13 19:52:12.803300 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:52:12.810935 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:52:12.812548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:52:12.814044 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:52:12.815688 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:52:12.820743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:52:12.827853 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:52:12.842433 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:52:12.844548 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:52:12.855358 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:52:12.856106 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Feb 13 19:52:12.856120 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Feb 13 19:52:12.857394 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:52:12.864840 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:52:12.872401 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:52:12.903660 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:52:12.911342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:52:12.931630 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Feb 13 19:52:12.931652 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Feb 13 19:52:12.937444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:52:13.483712 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:52:13.499418 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:52:13.528540 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Feb 13 19:52:13.547949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:52:13.563543 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:52:13.576333 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:52:13.591486 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:52:13.614213 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1247)
Feb 13 19:52:13.663323 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:52:13.675388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:52:13.681366 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:52:13.689184 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 19:52:13.692303 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:52:13.692319 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:52:13.692497 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:52:13.693437 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:52:13.701183 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 19:52:13.738256 systemd-networkd[1245]: lo: Link UP
Feb 13 19:52:13.738641 systemd-networkd[1245]: lo: Gained carrier
Feb 13 19:52:13.742556 systemd-networkd[1245]: Enumeration completed
Feb 13 19:52:13.742704 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:52:13.743268 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:13.743273 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:52:13.745645 systemd-networkd[1245]: eth0: Link UP
Feb 13 19:52:13.745653 systemd-networkd[1245]: eth0: Gained carrier
Feb 13 19:52:13.745678 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:52:13.747209 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:52:13.756608 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:52:13.763525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:52:13.767244 systemd-networkd[1245]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:52:13.828693 kernel: kvm_amd: TSC scaling supported
Feb 13 19:52:13.828766 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:52:13.828805 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:52:13.828822 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:52:13.829391 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:52:13.830665 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:52:13.851196 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:52:13.860602 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:52:13.890866 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:52:13.904465 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:52:13.913868 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:52:13.950976 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:52:13.952578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:52:13.966321 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:52:13.971247 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:52:14.005723 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:52:14.007947 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:52:14.009490 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:52:14.009518 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:52:14.010591 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:52:14.012958 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:52:14.025365 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:52:14.028561 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:52:14.030212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:14.032027 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:52:14.035569 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:52:14.039383 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:52:14.040407 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:52:14.052187 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 19:52:14.122870 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:52:14.136206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:52:14.183193 kernel: loop1: detected capacity change from 0 to 140992
Feb 13 19:52:14.234198 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 19:52:14.269192 kernel: loop3: detected capacity change from 0 to 210664
Feb 13 19:52:14.596266 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 19:52:14.608192 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 19:52:14.616729 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:52:14.617535 (sd-merge)[1307]: Merged extensions into '/usr'.
Feb 13 19:52:14.622442 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:52:14.622460 systemd[1]: Reloading...
Feb 13 19:52:14.685206 zram_generator::config[1339]: No configuration found.
Feb 13 19:52:14.714133 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:52:14.804659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:52:14.870357 systemd[1]: Reloading finished in 247 ms.
Feb 13 19:52:14.890083 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:52:14.929507 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:52:14.932942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:52:14.937320 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:52:14.937342 systemd[1]: Reloading...
Feb 13 19:52:14.958646 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:52:14.959049 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:52:14.960070 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:52:14.960383 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Feb 13 19:52:14.960463 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Feb 13 19:52:14.989197 zram_generator::config[1409]: No configuration found.
Feb 13 19:52:15.025489 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:52:15.025505 systemd-tmpfiles[1379]: Skipping /boot
Feb 13 19:52:15.036453 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:52:15.036472 systemd-tmpfiles[1379]: Skipping /boot
Feb 13 19:52:15.163329 systemd-networkd[1245]: eth0: Gained IPv6LL
Feb 13 19:52:15.204627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:52:15.272002 systemd[1]: Reloading finished in 334 ms.
Feb 13 19:52:15.296401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:52:15.309753 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:52:15.317715 systemd[1]: Starting audit-rules.service - Load Audit Rules...
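The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. A small sketch of where those images are discovered, per systemd-sysext's documented search paths (listing only; the merge itself is the overlay mount systemd performs):

    # Sketch only: enumerate sysext images the way systemd-sysext discovers
    # them. /etc/extensions/kubernetes.raw is the symlink Ignition wrote
    # earlier in this log; the other directories may be empty on this host.
    import pathlib

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = pathlib.Path(d)
        if base.is_dir():
            for image in sorted(base.iterdir()):
                print(image)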
Feb 13 19:52:15.332651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:52:15.335730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:52:15.341459 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:52:15.347568 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:52:15.351664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:15.351890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:15.356458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:15.360499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:15.369865 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:15.371781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:15.371969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:15.373646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:15.374288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:15.377875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:15.383067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:15.386560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:52:15.390576 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:15.390872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:15.397727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:15.397959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:15.407247 augenrules[1495]: No rules
Feb 13 19:52:15.410367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:15.421338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:15.424417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:15.425660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:15.425772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:15.426803 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:52:15.427200 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:52:15.429134 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:52:15.431050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:15.431295 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:15.433824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:15.434098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:15.435939 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:15.436244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:15.445939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:15.459524 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:52:15.461290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:52:15.463221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:52:15.470459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:52:15.471650 systemd-resolved[1462]: Positive Trust Anchors:
Feb 13 19:52:15.471671 systemd-resolved[1462]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:52:15.471712 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:52:15.475319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:52:15.477493 systemd-resolved[1462]: Defaulting to hostname 'linux'.
Feb 13 19:52:15.481279 augenrules[1512]: /sbin/augenrules: No change
Feb 13 19:52:15.481406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:52:15.482598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:52:15.482717 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:52:15.483572 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:52:15.486298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:52:15.486523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:52:15.489622 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:52:15.489836 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:52:15.491885 augenrules[1536]: No rules
Feb 13 19:52:15.491919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:52:15.492142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:52:15.494142 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:52:15.494494 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:52:15.496958 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:52:15.497388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:52:15.501551 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:52:15.504810 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:52:15.506905 systemd[1]: Finished ensure-sysext.service. Feb 13 19:52:15.516412 systemd[1]: Reached target network.target - Network. Feb 13 19:52:15.517475 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:52:15.518612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:52:15.519943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:52:15.520020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:52:15.530363 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:52:15.533240 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:52:15.534363 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:52:15.657804 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:52:16.936493 systemd-resolved[1462]: Clock change detected. Flushing caches. Feb 13 19:52:16.936546 systemd-timesyncd[1555]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:52:16.936591 systemd-timesyncd[1555]: Initial clock synchronization to Thu 2025-02-13 19:52:16.936444 UTC. Feb 13 19:52:16.937731 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:52:16.942487 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:52:16.943983 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:52:16.945320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:52:16.946802 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:52:16.948423 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:52:16.949815 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:52:16.951249 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:52:16.952671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:52:16.952707 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:52:16.953725 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:52:16.956291 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:52:16.959772 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:52:16.962809 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:52:16.974008 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:52:16.975421 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:52:16.976620 systemd[1]: Reached target basic.target - Basic System. 
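The jump in journal timestamps here (19:52:15.657 straight to 19:52:16.936) is systemd-timesyncd stepping the clock by roughly 1.3 s after its first exchange with 10.0.0.1:123, which is also why systemd-resolved flushes its caches. On the wire this is plain (S)NTP; a minimal client sketch, assuming UDP reachability to the same server:

    # Minimal SNTP (RFC 4330) exchange against the server named in the
    # log. Educational sketch only: timesyncd layers polling, filtering
    # and gradual slewing on top of this single round trip.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2_208_988_800   # seconds from 1900-01-01 to 1970-01-01

    def sntp_time(server="10.0.0.1", port=123, timeout=2.0):
        packet = b"\x1b" + 47 * b"\x00"          # LI=0, VN=3, mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, port))
            data, _ = sock.recvfrom(48)
        secs, frac = struct.unpack("!II", data[40:48])   # transmit timestamp
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    print(f"offset vs server: {sntp_time() - time.time():+.6f} s")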
Feb 13 19:52:16.977957 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:52:16.978004 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:52:16.978035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:52:16.979811 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:52:16.982722 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:52:16.985513 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:52:16.990317 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:52:16.995086 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:52:16.996380 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:52:16.996648 jq[1566]: false Feb 13 19:52:16.999482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:17.004910 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:52:17.011489 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:52:17.015609 extend-filesystems[1567]: Found loop3 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found loop4 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found loop5 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found sr0 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda1 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda2 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda3 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found usr Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda4 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda6 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda7 Feb 13 19:52:17.100288 extend-filesystems[1567]: Found vda9 Feb 13 19:52:17.100288 extend-filesystems[1567]: Checking size of /dev/vda9 Feb 13 19:52:17.100289 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:52:17.101990 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:52:17.109940 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:52:17.116685 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:52:17.118231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:52:17.119815 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:52:17.122602 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:52:17.129303 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:52:17.129774 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:52:17.130857 jq[1586]: true Feb 13 19:52:17.147362 jq[1592]: true Feb 13 19:52:17.377483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:52:17.378291 dbus-daemon[1564]: [system] SELinux support is enabled Feb 13 19:52:17.379894 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
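The extend-filesystems "Found ..." entries above come from enumerating the kernel's block devices before deciding which partition to grow. A rough userspace equivalent over sysfs (Linux-only; this version also prints sizes):

    # Rough equivalent of extend-filesystems' device scan: every block
    # device and partition the kernel exposes, with sizes from sysfs.
    # The `size` attribute counts 512-byte sectors regardless of the
    # device's real sector size.
    import os

    for name in sorted(os.listdir("/sys/class/block")):
        with open(f"/sys/class/block/{name}/size") as f:
            sectors = int(f.read())
        print(f"Found {name}  ({sectors * 512 / 2**30:.2f} GiB)")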
Feb 13 19:52:17.381503 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:52:17.394806 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:52:17.395153 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:52:17.396307 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:52:17.397093 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:52:17.397449 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:52:17.401375 update_engine[1585]: I20250213 19:52:17.400506 1585 main.cc:92] Flatcar Update Engine starting Feb 13 19:52:17.403974 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:52:17.404066 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:52:17.404090 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:52:17.404734 systemd-logind[1584]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:52:17.404756 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:52:17.407053 update_engine[1585]: I20250213 19:52:17.405344 1585 update_check_scheduler.cc:74] Next update check in 7m10s Feb 13 19:52:17.407088 tar[1590]: linux-amd64/helm Feb 13 19:52:17.408175 systemd-logind[1584]: New seat seat0. Feb 13 19:52:17.456141 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:52:17.460573 extend-filesystems[1567]: Resized partition /dev/vda9 Feb 13 19:52:17.456178 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:52:17.463441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:52:17.464322 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:52:17.466325 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:52:17.468420 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:52:17.473653 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:52:17.477626 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:52:17.482003 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1251) Feb 13 19:52:17.490619 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:52:17.521633 sshd_keygen[1638]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:52:17.556559 extend-filesystems[1654]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:52:17.566646 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:52:17.591640 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:52:17.599683 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:52:17.600055 systemd[1]: Finished issuegen.service - Generate /run/issue. 
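The resize2fs run that starts here completes in the kernel lines just below, growing the root filesystem on /dev/vda9 online from 553472 to 1864699 4 KiB blocks. In bytes that amounts to:

    # What the ext4 online resize amounts to, using the block counts and
    # 4k block size reported by the kernel/resize2fs messages.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {gib(old_blocks):.2f} GiB")                  # ~2.11 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")                  # ~7.11 GiB
    print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")     # ~5.00 GiB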
Feb 13 19:52:17.619679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:52:17.670636 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:52:17.681880 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:52:17.684282 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:52:17.688483 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:52:17.690149 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:52:17.797951 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:52:17.812976 tar[1590]: linux-amd64/LICENSE Feb 13 19:52:17.813105 tar[1590]: linux-amd64/README.md Feb 13 19:52:17.828606 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:52:18.832243 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:52:19.974381 containerd[1627]: time="2025-02-13T19:52:19.974280524Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:52:19.995353 containerd[1627]: time="2025-02-13T19:52:19.995276301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997024 containerd[1627]: time="2025-02-13T19:52:19.996955870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997024 containerd[1627]: time="2025-02-13T19:52:19.997000825Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:52:19.997024 containerd[1627]: time="2025-02-13T19:52:19.997020061Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:52:19.997281 containerd[1627]: time="2025-02-13T19:52:19.997246175Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:52:19.997281 containerd[1627]: time="2025-02-13T19:52:19.997270130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997371 containerd[1627]: time="2025-02-13T19:52:19.997342385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997371 containerd[1627]: time="2025-02-13T19:52:19.997360018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997659 containerd[1627]: time="2025-02-13T19:52:19.997627470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997659 containerd[1627]: time="2025-02-13T19:52:19.997647517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997703 containerd[1627]: time="2025-02-13T19:52:19.997660231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997703 containerd[1627]: time="2025-02-13T19:52:19.997671322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.997806 containerd[1627]: time="2025-02-13T19:52:19.997776228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.998116 containerd[1627]: time="2025-02-13T19:52:19.998081892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:52:19.998325 containerd[1627]: time="2025-02-13T19:52:19.998292026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:52:19.998325 containerd[1627]: time="2025-02-13T19:52:19.998312253Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:52:19.998440 containerd[1627]: time="2025-02-13T19:52:19.998412321Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:52:19.998493 containerd[1627]: time="2025-02-13T19:52:19.998472163Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:52:20.072582 extend-filesystems[1654]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:52:20.072582 extend-filesystems[1654]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:52:20.072582 extend-filesystems[1654]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:52:20.077422 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Feb 13 19:52:20.080234 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:52:20.080606 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:52:20.262296 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:52:20.264403 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:52:20.288899 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:52:20.422439 containerd[1627]: time="2025-02-13T19:52:20.422340206Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:52:20.422439 containerd[1627]: time="2025-02-13T19:52:20.422431307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:52:20.422439 containerd[1627]: time="2025-02-13T19:52:20.422450593Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:52:20.422628 containerd[1627]: time="2025-02-13T19:52:20.422470751Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:52:20.422628 containerd[1627]: time="2025-02-13T19:52:20.422488183Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:52:20.422809 containerd[1627]: time="2025-02-13T19:52:20.422726991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:52:20.423967 containerd[1627]: time="2025-02-13T19:52:20.423914157Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:52:20.424073 containerd[1627]: time="2025-02-13T19:52:20.424051454Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:52:20.424112 containerd[1627]: time="2025-02-13T19:52:20.424071121Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:52:20.424112 containerd[1627]: time="2025-02-13T19:52:20.424086270Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:52:20.424112 containerd[1627]: time="2025-02-13T19:52:20.424100877Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424114723Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424127938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424144449Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424160659Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424174826Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424187660Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424183 containerd[1627]: time="2025-02-13T19:52:20.424218458Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424241611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424255697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424269333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424283349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424297335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424312955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424325628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424340166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424355294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424372015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424385631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424401 containerd[1627]: time="2025-02-13T19:52:20.424401411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424700 containerd[1627]: time="2025-02-13T19:52:20.424418162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424700 containerd[1627]: time="2025-02-13T19:52:20.424440223Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:52:20.424700 containerd[1627]: time="2025-02-13T19:52:20.424462084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424700 containerd[1627]: time="2025-02-13T19:52:20.424482012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424700 containerd[1627]: time="2025-02-13T19:52:20.424494515Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:52:20.424841 containerd[1627]: time="2025-02-13T19:52:20.424742139Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:52:20.424841 containerd[1627]: time="2025-02-13T19:52:20.424789288Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:52:20.424841 containerd[1627]: time="2025-02-13T19:52:20.424809125Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:52:20.424841 containerd[1627]: time="2025-02-13T19:52:20.424830365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:52:20.424940 containerd[1627]: time="2025-02-13T19:52:20.424845784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:52:20.424940 containerd[1627]: time="2025-02-13T19:52:20.424866963Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:52:20.424940 containerd[1627]: time="2025-02-13T19:52:20.424880990Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:52:20.424940 containerd[1627]: time="2025-02-13T19:52:20.424900466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:52:20.425964 containerd[1627]: time="2025-02-13T19:52:20.425501583Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:52:20.425964 containerd[1627]: time="2025-02-13T19:52:20.425568469Z" level=info msg="Connect containerd service" Feb 13 19:52:20.425964 containerd[1627]: time="2025-02-13T19:52:20.425618703Z" level=info msg="using legacy CRI server" Feb 13 19:52:20.425964 containerd[1627]: time="2025-02-13T19:52:20.425626958Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:52:20.425964 containerd[1627]: time="2025-02-13T19:52:20.425787540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:52:20.426815 containerd[1627]: time="2025-02-13T19:52:20.426760334Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
19:52:20.427163 containerd[1627]: time="2025-02-13T19:52:20.427105681Z" level=info msg="Start subscribing containerd event" Feb 13 19:52:20.427213 containerd[1627]: time="2025-02-13T19:52:20.427166786Z" level=info msg="Start recovering state" Feb 13 19:52:20.427308 containerd[1627]: time="2025-02-13T19:52:20.427289566Z" level=info msg="Start event monitor" Feb 13 19:52:20.427372 containerd[1627]: time="2025-02-13T19:52:20.427298593Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:52:20.427372 containerd[1627]: time="2025-02-13T19:52:20.427315555Z" level=info msg="Start snapshots syncer" Feb 13 19:52:20.427372 containerd[1627]: time="2025-02-13T19:52:20.427336714Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:52:20.427372 containerd[1627]: time="2025-02-13T19:52:20.427346162Z" level=info msg="Start streaming server" Feb 13 19:52:20.427372 containerd[1627]: time="2025-02-13T19:52:20.427368714Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:52:20.427560 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:52:20.428974 containerd[1627]: time="2025-02-13T19:52:20.427690878Z" level=info msg="containerd successfully booted in 1.506983s" Feb 13 19:52:20.941407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:20.943608 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:52:20.947466 systemd[1]: Startup finished in 6.055s (kernel) + 7.726s (userspace) = 13.781s. Feb 13 19:52:20.947572 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:21.414946 kubelet[1699]: E0213 19:52:21.414766 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:21.419291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:21.419596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:52:25.994243 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:52:26.006045 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:33734.service - OpenSSH per-connection server daemon (10.0.0.1:33734). Feb 13 19:52:26.118324 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 33734 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:26.120231 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.129046 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:52:26.143625 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:52:26.146357 systemd-logind[1584]: New session 1 of user core. Feb 13 19:52:26.157483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:52:26.165728 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:52:26.169452 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:52:26.282859 systemd[1718]: Queued start job for default target default.target. 
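The kubelet failure above is expected at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so the unit keeps crash-looping until that happens. For pre-seeding a file by hand, a hedged sketch follows; the kind/apiVersion lines are the real schema identifiers, while the single setting shown is an illustrative placeholder, not taken from this host:

    # Hypothetical pre-seeding of a minimal KubeletConfiguration so the
    # kubelet can parse its config before kubeadm has run. Only the
    # kind/apiVersion lines are schema-mandated; the rest is placeholder.
    import textwrap
    from pathlib import Path

    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        authentication:
          anonymous:
            enabled: false
    """)

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)
    print(f"wrote {path} ({len(CONFIG)} bytes)")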
Feb 13 19:52:26.283343 systemd[1718]: Created slice app.slice - User Application Slice. Feb 13 19:52:26.283370 systemd[1718]: Reached target paths.target - Paths. Feb 13 19:52:26.283386 systemd[1718]: Reached target timers.target - Timers. Feb 13 19:52:26.293312 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:52:26.300040 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:52:26.300152 systemd[1718]: Reached target sockets.target - Sockets. Feb 13 19:52:26.300177 systemd[1718]: Reached target basic.target - Basic System. Feb 13 19:52:26.300290 systemd[1718]: Reached target default.target - Main User Target. Feb 13 19:52:26.300339 systemd[1718]: Startup finished in 124ms. Feb 13 19:52:26.300552 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:52:26.302492 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:52:26.362476 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:33738.service - OpenSSH per-connection server daemon (10.0.0.1:33738). Feb 13 19:52:26.402240 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 33738 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:26.403854 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.408036 systemd-logind[1584]: New session 2 of user core. Feb 13 19:52:26.413493 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:52:26.465785 sshd[1734]: Connection closed by 10.0.0.1 port 33738 Feb 13 19:52:26.466120 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:26.481643 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:33748.service - OpenSSH per-connection server daemon (10.0.0.1:33748). Feb 13 19:52:26.482350 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:33738.service: Deactivated successfully. Feb 13 19:52:26.484290 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:52:26.485194 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:52:26.486926 systemd-logind[1584]: Removed session 2. Feb 13 19:52:26.520242 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 33748 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:26.521984 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.526807 systemd-logind[1584]: New session 3 of user core. Feb 13 19:52:26.536862 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:52:26.589835 sshd[1742]: Connection closed by 10.0.0.1 port 33748 Feb 13 19:52:26.590283 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:26.603693 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:33758.service - OpenSSH per-connection server daemon (10.0.0.1:33758). Feb 13 19:52:26.604466 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:33748.service: Deactivated successfully. Feb 13 19:52:26.606868 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:52:26.608525 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:52:26.611374 systemd-logind[1584]: Removed session 3. 
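Every "Accepted publickey" entry in these sessions carries the same fingerprint, "SHA256:8WP2kq...": the SHA-256 of the raw public-key blob, base64-encoded with the padding stripped. Reproducing such a fingerprint from an authorized_keys line (the log records only the digest, not the key itself):

    # Compute an OpenSSH-style "SHA256:..." fingerprint from a public-key
    # line of the form "ssh-rsa AAAA... comment".
    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # e.g.: print(ssh_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))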
Feb 13 19:52:26.645740 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 33758 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:26.647623 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.651832 systemd-logind[1584]: New session 4 of user core. Feb 13 19:52:26.661498 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:52:26.714816 sshd[1750]: Connection closed by 10.0.0.1 port 33758 Feb 13 19:52:26.715149 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:26.727616 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:33762.service - OpenSSH per-connection server daemon (10.0.0.1:33762). Feb 13 19:52:26.728458 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:33758.service: Deactivated successfully. Feb 13 19:52:26.730768 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:52:26.731680 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:52:26.733581 systemd-logind[1584]: Removed session 4. Feb 13 19:52:26.765071 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 33762 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:26.766462 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.770873 systemd-logind[1584]: New session 5 of user core. Feb 13 19:52:26.780565 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:52:26.842538 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:52:26.842963 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:26.860456 sudo[1759]: pam_unix(sudo:session): session closed for user root Feb 13 19:52:26.862634 sshd[1758]: Connection closed by 10.0.0.1 port 33762 Feb 13 19:52:26.863097 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:26.873513 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:33764.service - OpenSSH per-connection server daemon (10.0.0.1:33764). Feb 13 19:52:26.874021 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:33762.service: Deactivated successfully. Feb 13 19:52:26.877657 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:52:26.878526 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:52:26.879414 systemd-logind[1584]: Removed session 5. Feb 13 19:52:26.914421 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 33764 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:26.916258 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.921308 systemd-logind[1584]: New session 6 of user core. Feb 13 19:52:26.941665 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:52:26.998340 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:52:26.998682 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:27.002180 sudo[1769]: pam_unix(sudo:session): session closed for user root Feb 13 19:52:27.009824 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:52:27.010268 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:27.034623 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:52:27.065559 augenrules[1791]: No rules Feb 13 19:52:27.067783 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:52:27.068268 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:52:27.069699 sudo[1768]: pam_unix(sudo:session): session closed for user root Feb 13 19:52:27.071382 sshd[1767]: Connection closed by 10.0.0.1 port 33764 Feb 13 19:52:27.071743 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:27.081433 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:33770.service - OpenSSH per-connection server daemon (10.0.0.1:33770). Feb 13 19:52:27.081913 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:33764.service: Deactivated successfully. Feb 13 19:52:27.085107 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:52:27.085782 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:52:27.086552 systemd-logind[1584]: Removed session 6. Feb 13 19:52:27.118539 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 33770 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:52:27.120032 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:27.124306 systemd-logind[1584]: New session 7 of user core. Feb 13 19:52:27.133483 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:52:27.188021 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:52:27.188552 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:52:27.480523 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:52:27.480765 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:52:27.753819 dockerd[1825]: time="2025-02-13T19:52:27.753650641Z" level=info msg="Starting up" Feb 13 19:52:28.668466 dockerd[1825]: time="2025-02-13T19:52:28.668376201Z" level=info msg="Loading containers: start." Feb 13 19:52:29.004256 kernel: Initializing XFRM netlink socket Feb 13 19:52:29.106151 systemd-networkd[1245]: docker0: Link UP Feb 13 19:52:29.161448 dockerd[1825]: time="2025-02-13T19:52:29.161388609Z" level=info msg="Loading containers: done." Feb 13 19:52:29.177900 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck719930992-merged.mount: Deactivated successfully. 
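dockerd is coming up here (XFRM netlink socket initialized, docker0 link up); once it logs "API listen on /run/docker.sock" just below, the daemon answers on its unix socket. A liveness sketch using the official Python SDK, assuming `pip install docker` (the SDK is not part of Flatcar):

    # Liveness check against the freshly started daemon over its default
    # unix socket. Requires the docker-py SDK (`pip install docker`).
    import docker

    client = docker.from_env()     # honours DOCKER_HOST, else /var/run/docker.sock
    print("ping:", client.ping())  # True once "API listen" has been logged
    print("server version:", client.version()["Version"])
    print("storage driver:", client.info()["Driver"])   # overlay2, per the log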
Feb 13 19:52:29.180730 dockerd[1825]: time="2025-02-13T19:52:29.180662808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:52:29.180852 dockerd[1825]: time="2025-02-13T19:52:29.180793694Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:52:29.180968 dockerd[1825]: time="2025-02-13T19:52:29.180941862Z" level=info msg="Daemon has completed initialization" Feb 13 19:52:29.555128 dockerd[1825]: time="2025-02-13T19:52:29.554923367Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:52:29.555263 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:52:30.337953 containerd[1627]: time="2025-02-13T19:52:30.337901200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:52:31.290906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657051368.mount: Deactivated successfully. Feb 13 19:52:31.670160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:52:31.686567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:31.862004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:31.866813 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:32.079777 kubelet[2078]: E0213 19:52:32.079566 2078 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:32.087902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:32.088186 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
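The kubelet crash loop is paced by the unit's Restart= settings: the first failure was logged at 19:52:21.419 and the "restart counter is at 1" line lands at 19:52:31.670, i.e. close to a 10 s restart delay. Checking the arithmetic on the journal timestamps:

    # Delay between the kubelet's first failure and its scheduled restart,
    # using the timestamps exactly as they appear in the journal.
    from datetime import datetime

    fmt = "%b %d %H:%M:%S.%f"
    failed    = datetime.strptime("Feb 13 19:52:21.419596", fmt)
    restarted = datetime.strptime("Feb 13 19:52:31.670160", fmt)
    print(f"restart delay: {(restarted - failed).total_seconds():.2f} s")  # 10.25 s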
Feb 13 19:52:33.177642 containerd[1627]: time="2025-02-13T19:52:33.177546003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:33.178656 containerd[1627]: time="2025-02-13T19:52:33.178603817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 19:52:33.179869 containerd[1627]: time="2025-02-13T19:52:33.179822011Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:33.183517 containerd[1627]: time="2025-02-13T19:52:33.183471674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:33.184996 containerd[1627]: time="2025-02-13T19:52:33.184945317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.846999293s" Feb 13 19:52:33.185052 containerd[1627]: time="2025-02-13T19:52:33.185007113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:52:33.210240 containerd[1627]: time="2025-02-13T19:52:33.210175123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:52:35.260710 containerd[1627]: time="2025-02-13T19:52:35.260606842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:35.272194 containerd[1627]: time="2025-02-13T19:52:35.272087638Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 19:52:35.284271 containerd[1627]: time="2025-02-13T19:52:35.284174010Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:35.288675 containerd[1627]: time="2025-02-13T19:52:35.288612122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:35.289819 containerd[1627]: time="2025-02-13T19:52:35.289770584Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.079529428s" Feb 13 19:52:35.289819 containerd[1627]: time="2025-02-13T19:52:35.289810849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
19:52:35.316513 containerd[1627]: time="2025-02-13T19:52:35.316448315Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:52:36.861929 containerd[1627]: time="2025-02-13T19:52:36.861846556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:36.898697 containerd[1627]: time="2025-02-13T19:52:36.898614546Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 19:52:36.946026 containerd[1627]: time="2025-02-13T19:52:36.945969146Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:36.996788 containerd[1627]: time="2025-02-13T19:52:36.996721747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:36.998191 containerd[1627]: time="2025-02-13T19:52:36.998141008Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.681645976s" Feb 13 19:52:36.998191 containerd[1627]: time="2025-02-13T19:52:36.998173399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:52:37.020974 containerd[1627]: time="2025-02-13T19:52:37.020916091Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:52:38.555397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249864731.mount: Deactivated successfully. 
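Each pull above reports both a size and a wall-clock duration, which gives a rough effective rate; rough because the logged size is the registry image size, not necessarily the bytes actually transferred:

    # Rough effective pull rates for the images fetched above; sizes and
    # durations are copied verbatim from the containerd messages.
    pulls = {
        "kube-apiserver:v1.30.10":          (32_675_014, 2.846999293),
        "kube-controller-manager:v1.30.10": (31_058_091, 2.079529428),
        "kube-scheduler:v1.30.10":          (19_228_694, 1.681645976),
    }

    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")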
Feb 13 19:52:39.129149 containerd[1627]: time="2025-02-13T19:52:39.129087111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:39.129877 containerd[1627]: time="2025-02-13T19:52:39.129848178Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:52:39.130941 containerd[1627]: time="2025-02-13T19:52:39.130908396Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:39.133141 containerd[1627]: time="2025-02-13T19:52:39.133060782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:39.133814 containerd[1627]: time="2025-02-13T19:52:39.133768268Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.112815759s" Feb 13 19:52:39.133814 containerd[1627]: time="2025-02-13T19:52:39.133805548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:52:39.157076 containerd[1627]: time="2025-02-13T19:52:39.157031978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:52:39.638293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387337521.mount: Deactivated successfully. Feb 13 19:52:42.338382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:52:42.348449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:42.524385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:42.524778 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:52:42.912694 kubelet[2206]: E0213 19:52:42.912636 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:52:42.916701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:52:42.916965 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:52:42.989220 containerd[1627]: time="2025-02-13T19:52:42.989148445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:42.990168 containerd[1627]: time="2025-02-13T19:52:42.990117572Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:52:42.993406 containerd[1627]: time="2025-02-13T19:52:42.993356085Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:42.996687 containerd[1627]: time="2025-02-13T19:52:42.996634652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:42.997918 containerd[1627]: time="2025-02-13T19:52:42.997830073Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.840740898s" Feb 13 19:52:42.997918 containerd[1627]: time="2025-02-13T19:52:42.997902710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:52:43.019400 containerd[1627]: time="2025-02-13T19:52:43.019355984Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:52:44.060299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238672970.mount: Deactivated successfully. 
Feb 13 19:52:44.094409 containerd[1627]: time="2025-02-13T19:52:44.093962521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:44.095308 containerd[1627]: time="2025-02-13T19:52:44.095192808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 19:52:44.096687 containerd[1627]: time="2025-02-13T19:52:44.096636685Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:44.099683 containerd[1627]: time="2025-02-13T19:52:44.099630930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:44.100679 containerd[1627]: time="2025-02-13T19:52:44.100632968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.081233693s" Feb 13 19:52:44.100741 containerd[1627]: time="2025-02-13T19:52:44.100677332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:52:44.126681 containerd[1627]: time="2025-02-13T19:52:44.126638910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:52:45.550933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411324544.mount: Deactivated successfully. Feb 13 19:52:47.066922 containerd[1627]: time="2025-02-13T19:52:47.066856046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:47.067703 containerd[1627]: time="2025-02-13T19:52:47.067654874Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 19:52:47.068757 containerd[1627]: time="2025-02-13T19:52:47.068717266Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:47.071611 containerd[1627]: time="2025-02-13T19:52:47.071579973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:47.072617 containerd[1627]: time="2025-02-13T19:52:47.072586591Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.945897256s" Feb 13 19:52:47.072684 containerd[1627]: time="2025-02-13T19:52:47.072619112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:52:50.022136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:52:50.032434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:50.050439 systemd[1]: Reloading requested from client PID 2363 ('systemctl') (unit session-7.scope)... Feb 13 19:52:50.050456 systemd[1]: Reloading... Feb 13 19:52:50.134230 zram_generator::config[2405]: No configuration found. Feb 13 19:52:50.416115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:52:50.496343 systemd[1]: Reloading finished in 445 ms. Feb 13 19:52:50.555379 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:52:50.555482 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:52:50.555831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:50.557827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:50.702190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:50.707015 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:52:50.743005 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:52:50.743005 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:52:50.743005 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:52:50.743418 kubelet[2462]: I0213 19:52:50.743068 2462 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:52:51.351756 kubelet[2462]: I0213 19:52:51.351706 2462 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:52:51.351756 kubelet[2462]: I0213 19:52:51.351732 2462 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:52:51.351964 kubelet[2462]: I0213 19:52:51.351917 2462 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:52:51.366812 kubelet[2462]: I0213 19:52:51.366769 2462 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:52:51.367759 kubelet[2462]: E0213 19:52:51.367713 2462 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.378805 kubelet[2462]: I0213 19:52:51.378777 2462 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:52:51.379595 kubelet[2462]: I0213 19:52:51.379546 2462 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:52:51.379763 kubelet[2462]: I0213 19:52:51.379579 2462 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:52:51.379874 kubelet[2462]: I0213 19:52:51.379776 2462 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:52:51.379874 kubelet[2462]: I0213 19:52:51.379789 2462 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:52:51.379954 kubelet[2462]: I0213 19:52:51.379940 2462 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:52:51.380599 kubelet[2462]: I0213 19:52:51.380565 2462 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:52:51.380648 kubelet[2462]: I0213 19:52:51.380600 2462 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:52:51.380648 kubelet[2462]: I0213 19:52:51.380625 2462 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:52:51.380704 kubelet[2462]: I0213 19:52:51.380653 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:52:51.381256 kubelet[2462]: W0213 19:52:51.381142 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.381256 kubelet[2462]: E0213 19:52:51.381223 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.381256 kubelet[2462]: W0213 19:52:51.381141 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.381256 kubelet[2462]: E0213 19:52:51.381251 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.383694 kubelet[2462]: I0213 19:52:51.383679 2462 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:52:51.384888 kubelet[2462]: I0213 19:52:51.384870 2462 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:52:51.384956 kubelet[2462]: W0213 19:52:51.384916 2462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:52:51.385825 kubelet[2462]: I0213 19:52:51.385471 2462 server.go:1264] "Started kubelet" Feb 13 19:52:51.386535 kubelet[2462]: I0213 19:52:51.386479 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:52:51.387578 kubelet[2462]: I0213 19:52:51.386587 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:52:51.387578 kubelet[2462]: I0213 19:52:51.386816 2462 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:52:51.387578 kubelet[2462]: I0213 19:52:51.386850 2462 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:52:51.392375 kubelet[2462]: I0213 19:52:51.391477 2462 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:52:51.392375 kubelet[2462]: E0213 19:52:51.392092 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dc8b2b9d87fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:52:51.38545254 +0000 UTC m=+0.674506390,LastTimestamp:2025-02-13 19:52:51.38545254 +0000 UTC m=+0.674506390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:52:51.392375 kubelet[2462]: E0213 19:52:51.392286 2462 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:52:51.392515 kubelet[2462]: I0213 19:52:51.392388 2462 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:52:51.393331 kubelet[2462]: I0213 19:52:51.392640 2462 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:52:51.393331 kubelet[2462]: I0213 19:52:51.392805 2462 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:52:51.393825 kubelet[2462]: W0213 19:52:51.393792 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection 
refused Feb 13 19:52:51.393874 kubelet[2462]: E0213 19:52:51.393835 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.394563 kubelet[2462]: E0213 19:52:51.394186 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="200ms" Feb 13 19:52:51.395008 kubelet[2462]: I0213 19:52:51.394889 2462 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:52:51.396248 kubelet[2462]: E0213 19:52:51.396229 2462 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:52:51.396617 kubelet[2462]: I0213 19:52:51.396601 2462 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:52:51.396685 kubelet[2462]: I0213 19:52:51.396675 2462 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:52:51.411501 kubelet[2462]: I0213 19:52:51.411455 2462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:52:51.412991 kubelet[2462]: I0213 19:52:51.412947 2462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:52:51.413039 kubelet[2462]: I0213 19:52:51.413008 2462 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:52:51.413039 kubelet[2462]: I0213 19:52:51.413035 2462 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:52:51.413107 kubelet[2462]: E0213 19:52:51.413083 2462 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:52:51.413564 kubelet[2462]: W0213 19:52:51.413534 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.413615 kubelet[2462]: E0213 19:52:51.413566 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:51.420184 kubelet[2462]: I0213 19:52:51.420154 2462 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:52:51.420184 kubelet[2462]: I0213 19:52:51.420177 2462 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:52:51.420340 kubelet[2462]: I0213 19:52:51.420195 2462 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:52:51.493765 kubelet[2462]: I0213 19:52:51.493720 2462 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:52:51.494123 kubelet[2462]: E0213 19:52:51.494100 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: 
connection refused" node="localhost" Feb 13 19:52:51.513409 kubelet[2462]: E0213 19:52:51.513369 2462 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:52:51.595267 kubelet[2462]: E0213 19:52:51.595192 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="400ms" Feb 13 19:52:51.695710 kubelet[2462]: I0213 19:52:51.695586 2462 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:52:51.695908 kubelet[2462]: E0213 19:52:51.695884 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Feb 13 19:52:51.714173 kubelet[2462]: E0213 19:52:51.714143 2462 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:52:51.799821 kubelet[2462]: I0213 19:52:51.799767 2462 policy_none.go:49] "None policy: Start" Feb 13 19:52:51.800577 kubelet[2462]: I0213 19:52:51.800541 2462 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:52:51.800577 kubelet[2462]: I0213 19:52:51.800564 2462 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:52:51.809730 kubelet[2462]: I0213 19:52:51.809708 2462 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:52:51.809952 kubelet[2462]: I0213 19:52:51.809915 2462 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:52:51.810069 kubelet[2462]: I0213 19:52:51.810053 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:52:51.811136 kubelet[2462]: E0213 19:52:51.811117 2462 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:52:51.996843 kubelet[2462]: E0213 19:52:51.996722 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="800ms" Feb 13 19:52:52.097297 kubelet[2462]: I0213 19:52:52.097269 2462 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:52:52.097571 kubelet[2462]: E0213 19:52:52.097549 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Feb 13 19:52:52.114739 kubelet[2462]: I0213 19:52:52.114704 2462 topology_manager.go:215] "Topology Admit Handler" podUID="aaa47cc67d012e7207050967315da299" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:52:52.115556 kubelet[2462]: I0213 19:52:52.115528 2462 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:52:52.116148 kubelet[2462]: I0213 19:52:52.116132 2462 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:52:52.196787 
kubelet[2462]: I0213 19:52:52.196733 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:52.196787 kubelet[2462]: I0213 19:52:52.196784 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:52.196941 kubelet[2462]: I0213 19:52:52.196811 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:52.196941 kubelet[2462]: I0213 19:52:52.196833 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:52:52.196941 kubelet[2462]: I0213 19:52:52.196853 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaa47cc67d012e7207050967315da299-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa47cc67d012e7207050967315da299\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:52.196941 kubelet[2462]: I0213 19:52:52.196872 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaa47cc67d012e7207050967315da299-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aaa47cc67d012e7207050967315da299\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:52.196941 kubelet[2462]: I0213 19:52:52.196892 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:52.197077 kubelet[2462]: I0213 19:52:52.196915 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:52.197077 kubelet[2462]: I0213 19:52:52.196934 2462 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaa47cc67d012e7207050967315da299-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa47cc67d012e7207050967315da299\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:52.350763 kubelet[2462]: W0213 19:52:52.350615 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.350763 kubelet[2462]: E0213 19:52:52.350693 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.420121 kubelet[2462]: E0213 19:52:52.420084 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:52.420625 containerd[1627]: time="2025-02-13T19:52:52.420589372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aaa47cc67d012e7207050967315da299,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:52.421666 kubelet[2462]: E0213 19:52:52.421645 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:52.421895 containerd[1627]: time="2025-02-13T19:52:52.421873036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:52.424156 kubelet[2462]: E0213 19:52:52.424137 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:52.424423 containerd[1627]: time="2025-02-13T19:52:52.424394305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:52.572617 kubelet[2462]: W0213 19:52:52.572545 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.572617 kubelet[2462]: E0213 19:52:52.572611 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.608317 kubelet[2462]: W0213 19:52:52.608195 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.608317 kubelet[2462]: E0213 19:52:52.608248 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.663703 kubelet[2462]: W0213 19:52:52.663656 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.663703 kubelet[2462]: E0213 19:52:52.663693 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:52.797639 kubelet[2462]: E0213 19:52:52.797589 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="1.6s" Feb 13 19:52:52.898745 kubelet[2462]: I0213 19:52:52.898641 2462 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:52:52.899113 kubelet[2462]: E0213 19:52:52.898965 2462 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Feb 13 19:52:53.369477 kubelet[2462]: E0213 19:52:53.369352 2462 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:53.987732 kubelet[2462]: W0213 19:52:53.987638 2462 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:53.987732 kubelet[2462]: E0213 19:52:53.987680 2462 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Feb 13 19:52:54.070546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900779805.mount: Deactivated successfully. 
Feb 13 19:52:54.075539 containerd[1627]: time="2025-02-13T19:52:54.075489627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:54.078497 containerd[1627]: time="2025-02-13T19:52:54.078427259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:52:54.079496 containerd[1627]: time="2025-02-13T19:52:54.079452899Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:54.081701 containerd[1627]: time="2025-02-13T19:52:54.081647585Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:54.083260 containerd[1627]: time="2025-02-13T19:52:54.083225977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:52:54.084461 containerd[1627]: time="2025-02-13T19:52:54.084417676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:54.085613 containerd[1627]: time="2025-02-13T19:52:54.085532968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:52:54.087735 containerd[1627]: time="2025-02-13T19:52:54.087685934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:54.088590 containerd[1627]: time="2025-02-13T19:52:54.088562999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.667888693s" Feb 13 19:52:54.091736 containerd[1627]: time="2025-02-13T19:52:54.091709132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.669779967s" Feb 13 19:52:54.092525 containerd[1627]: time="2025-02-13T19:52:54.092493208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.668033036s" Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.198515093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.198578264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.198595387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.198673477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.199247500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.199458685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:54.199492 containerd[1627]: time="2025-02-13T19:52:54.199482470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:54.200175 containerd[1627]: time="2025-02-13T19:52:54.197681432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:54.200175 containerd[1627]: time="2025-02-13T19:52:54.199989896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:54.200175 containerd[1627]: time="2025-02-13T19:52:54.200012579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:54.200318 containerd[1627]: time="2025-02-13T19:52:54.200258221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:54.200942 containerd[1627]: time="2025-02-13T19:52:54.200740728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:54.261983 containerd[1627]: time="2025-02-13T19:52:54.261875750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"b14563a76b30ece3bca803c6944339ebf5e9db220e3ad9d65e243ca1706553d5\"" Feb 13 19:52:54.263932 kubelet[2462]: E0213 19:52:54.263912 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:54.266587 containerd[1627]: time="2025-02-13T19:52:54.266552514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aaa47cc67d012e7207050967315da299,Namespace:kube-system,Attempt:0,} returns sandbox id \"17da4ded2c854aa2aa2194bd25f420fc685ad25fce017c16600e292d0e3a71d1\"" Feb 13 19:52:54.267163 kubelet[2462]: E0213 19:52:54.267135 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:54.268063 containerd[1627]: time="2025-02-13T19:52:54.268025183Z" level=info msg="CreateContainer within sandbox \"b14563a76b30ece3bca803c6944339ebf5e9db220e3ad9d65e243ca1706553d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:52:54.270034 containerd[1627]: time="2025-02-13T19:52:54.270003433Z" level=info msg="CreateContainer within sandbox \"17da4ded2c854aa2aa2194bd25f420fc685ad25fce017c16600e292d0e3a71d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:52:54.274758 containerd[1627]: time="2025-02-13T19:52:54.274727417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"853f9abd0beacf04bd1afc123eb5209dfea7d2694e19ba0a1b1246b2663376de\"" Feb 13 19:52:54.275280 kubelet[2462]: E0213 19:52:54.275249 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:54.277223 containerd[1627]: time="2025-02-13T19:52:54.277121887Z" level=info msg="CreateContainer within sandbox \"853f9abd0beacf04bd1afc123eb5209dfea7d2694e19ba0a1b1246b2663376de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:52:54.296061 containerd[1627]: time="2025-02-13T19:52:54.296002765Z" level=info msg="CreateContainer within sandbox \"b14563a76b30ece3bca803c6944339ebf5e9db220e3ad9d65e243ca1706553d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c513a4b1e38ee0e1cd4082240f99841bc1e7a71c0004da8cbbc8d13fd5cd8c88\"" Feb 13 19:52:54.296707 containerd[1627]: time="2025-02-13T19:52:54.296671549Z" level=info msg="StartContainer for \"c513a4b1e38ee0e1cd4082240f99841bc1e7a71c0004da8cbbc8d13fd5cd8c88\"" Feb 13 19:52:54.303133 containerd[1627]: time="2025-02-13T19:52:54.303023049Z" level=info msg="CreateContainer within sandbox \"17da4ded2c854aa2aa2194bd25f420fc685ad25fce017c16600e292d0e3a71d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ef46300c56bb8ff06469902e8259eac655f4cb502b202e5fcb9e7ba9f23c4b0c\"" Feb 13 19:52:54.304056 containerd[1627]: time="2025-02-13T19:52:54.304013542Z" level=info msg="StartContainer for 
\"ef46300c56bb8ff06469902e8259eac655f4cb502b202e5fcb9e7ba9f23c4b0c\"" Feb 13 19:52:54.313031 containerd[1627]: time="2025-02-13T19:52:54.312981037Z" level=info msg="CreateContainer within sandbox \"853f9abd0beacf04bd1afc123eb5209dfea7d2694e19ba0a1b1246b2663376de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"376082d30921b69443a023d0b6620c799a4ffa7c82c7a232a981754184de2730\"" Feb 13 19:52:54.313516 containerd[1627]: time="2025-02-13T19:52:54.313492339Z" level=info msg="StartContainer for \"376082d30921b69443a023d0b6620c799a4ffa7c82c7a232a981754184de2730\"" Feb 13 19:52:54.375012 containerd[1627]: time="2025-02-13T19:52:54.374911097Z" level=info msg="StartContainer for \"c513a4b1e38ee0e1cd4082240f99841bc1e7a71c0004da8cbbc8d13fd5cd8c88\" returns successfully" Feb 13 19:52:54.387563 containerd[1627]: time="2025-02-13T19:52:54.387497865Z" level=info msg="StartContainer for \"ef46300c56bb8ff06469902e8259eac655f4cb502b202e5fcb9e7ba9f23c4b0c\" returns successfully" Feb 13 19:52:54.397373 containerd[1627]: time="2025-02-13T19:52:54.397328929Z" level=info msg="StartContainer for \"376082d30921b69443a023d0b6620c799a4ffa7c82c7a232a981754184de2730\" returns successfully" Feb 13 19:52:54.398226 kubelet[2462]: E0213 19:52:54.398166 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="3.2s" Feb 13 19:52:54.424872 kubelet[2462]: E0213 19:52:54.424749 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:54.427254 kubelet[2462]: E0213 19:52:54.426953 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:54.430624 kubelet[2462]: E0213 19:52:54.430588 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:54.502306 kubelet[2462]: I0213 19:52:54.502273 2462 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:52:55.431833 kubelet[2462]: E0213 19:52:55.431796 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:55.463731 kubelet[2462]: E0213 19:52:55.463020 2462 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:55.600177 kubelet[2462]: E0213 19:52:55.600040 2462 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dc8b2b9d87fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:52:51.38545254 +0000 UTC m=+0.674506390,LastTimestamp:2025-02-13 19:52:51.38545254 +0000 UTC m=+0.674506390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:52:55.648279 kubelet[2462]: I0213 19:52:55.648241 2462 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:52:56.383005 kubelet[2462]: I0213 19:52:56.382946 2462 apiserver.go:52] "Watching apiserver" Feb 13 19:52:56.393058 kubelet[2462]: I0213 19:52:56.393023 2462 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:52:58.138039 systemd[1]: Reloading requested from client PID 2743 ('systemctl') (unit session-7.scope)... Feb 13 19:52:58.138056 systemd[1]: Reloading... Feb 13 19:52:58.210238 zram_generator::config[2785]: No configuration found. Feb 13 19:52:58.329782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:52:58.408429 systemd[1]: Reloading finished in 269 ms. Feb 13 19:52:58.442252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:58.458473 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:52:58.458878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:58.470390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:52:58.609714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:52:58.614706 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:52:58.674007 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:52:58.674007 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:52:58.674007 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:52:58.674484 kubelet[2837]: I0213 19:52:58.673974 2837 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:52:58.679427 kubelet[2837]: I0213 19:52:58.679389 2837 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:52:58.679427 kubelet[2837]: I0213 19:52:58.679419 2837 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:52:58.681059 kubelet[2837]: I0213 19:52:58.679996 2837 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:52:58.682769 kubelet[2837]: I0213 19:52:58.682746 2837 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:52:58.684170 kubelet[2837]: I0213 19:52:58.684000 2837 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:52:58.692430 kubelet[2837]: I0213 19:52:58.692355 2837 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:52:58.693071 kubelet[2837]: I0213 19:52:58.693024 2837 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:52:58.693325 kubelet[2837]: I0213 19:52:58.693061 2837 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:52:58.693438 kubelet[2837]: I0213 19:52:58.693339 2837 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:52:58.693438 kubelet[2837]: I0213 19:52:58.693353 2837 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:52:58.693438 kubelet[2837]: I0213 19:52:58.693408 2837 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:52:58.693551 kubelet[2837]: I0213 19:52:58.693535 2837 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:52:58.693594 kubelet[2837]: I0213 19:52:58.693552 2837 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:52:58.694885 kubelet[2837]: I0213 19:52:58.694416 2837 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:52:58.694885 kubelet[2837]: I0213 19:52:58.694471 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:52:58.696694 kubelet[2837]: I0213 19:52:58.696651 2837 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:52:58.698350 kubelet[2837]: I0213 19:52:58.696867 2837 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:52:58.698350 kubelet[2837]: I0213 19:52:58.697942 2837 server.go:1264] "Started kubelet" Feb 13 19:52:58.702752 kubelet[2837]: I0213 19:52:58.702439 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:52:58.702831 kubelet[2837]: I0213 19:52:58.702771 2837 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:52:58.704622 kubelet[2837]: I0213 19:52:58.704571 2837 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:52:58.709479 kubelet[2837]: I0213 19:52:58.706857 2837 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:52:58.709479 kubelet[2837]: I0213 19:52:58.707671 2837 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:52:58.709479 kubelet[2837]: I0213 19:52:58.708020 2837 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:52:58.709479 kubelet[2837]: I0213 19:52:58.708793 2837 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:52:58.709479 kubelet[2837]: I0213 19:52:58.708917 2837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:52:58.709709 kubelet[2837]: I0213 19:52:58.709696 2837 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:52:58.710536 kubelet[2837]: I0213 19:52:58.710496 2837 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:52:58.715411 kubelet[2837]: I0213 19:52:58.715382 2837 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:52:58.715609 kubelet[2837]: E0213 19:52:58.715591 2837 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:52:58.726985 kubelet[2837]: I0213 19:52:58.726932 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:52:58.728967 kubelet[2837]: I0213 19:52:58.728930 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:52:58.728967 kubelet[2837]: I0213 19:52:58.728961 2837 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:52:58.729065 kubelet[2837]: I0213 19:52:58.728987 2837 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:52:58.729065 kubelet[2837]: E0213 19:52:58.729038 2837 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:52:58.765912 kubelet[2837]: I0213 19:52:58.765884 2837 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:52:58.765912 kubelet[2837]: I0213 19:52:58.765906 2837 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:52:58.766055 kubelet[2837]: I0213 19:52:58.765926 2837 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:52:58.766082 kubelet[2837]: I0213 19:52:58.766066 2837 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:52:58.766120 kubelet[2837]: I0213 19:52:58.766076 2837 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:52:58.766120 kubelet[2837]: I0213 19:52:58.766092 2837 policy_none.go:49] "None policy: Start" Feb 13 19:52:58.766596 kubelet[2837]: I0213 19:52:58.766582 2837 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:52:58.766638 kubelet[2837]: I0213 19:52:58.766601 2837 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:52:58.766712 kubelet[2837]: I0213 19:52:58.766702 2837 state_mem.go:75] "Updated machine memory state" Feb 13 19:52:58.769130 kubelet[2837]: I0213 19:52:58.768134 2837 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:52:58.769130 
kubelet[2837]: I0213 19:52:58.768300 2837 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:52:58.769130 kubelet[2837]: I0213 19:52:58.768375 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:52:58.812187 kubelet[2837]: I0213 19:52:58.812161 2837 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:52:58.819238 kubelet[2837]: I0213 19:52:58.819218 2837 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:52:58.819323 kubelet[2837]: I0213 19:52:58.819287 2837 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:52:58.829324 kubelet[2837]: I0213 19:52:58.829284 2837 topology_manager.go:215] "Topology Admit Handler" podUID="aaa47cc67d012e7207050967315da299" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:52:58.829441 kubelet[2837]: I0213 19:52:58.829382 2837 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:52:58.829465 kubelet[2837]: I0213 19:52:58.829457 2837 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:52:58.909735 kubelet[2837]: I0213 19:52:58.909679 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaa47cc67d012e7207050967315da299-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa47cc67d012e7207050967315da299\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:58.909735 kubelet[2837]: I0213 19:52:58.909722 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaa47cc67d012e7207050967315da299-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aaa47cc67d012e7207050967315da299\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:58.909884 kubelet[2837]: I0213 19:52:58.909750 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:58.909884 kubelet[2837]: I0213 19:52:58.909769 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaa47cc67d012e7207050967315da299-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aaa47cc67d012e7207050967315da299\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:58.909884 kubelet[2837]: I0213 19:52:58.909832 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:58.909884 kubelet[2837]: I0213 19:52:58.909855 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:58.910031 kubelet[2837]: I0213 19:52:58.909919 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:58.910031 kubelet[2837]: I0213 19:52:58.909965 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:52:58.910031 kubelet[2837]: I0213 19:52:58.909989 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:52:59.139625 kubelet[2837]: E0213 19:52:59.139597 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:59.140053 kubelet[2837]: E0213 19:52:59.139987 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:59.140126 kubelet[2837]: E0213 19:52:59.140077 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:59.695469 kubelet[2837]: I0213 19:52:59.695403 2837 apiserver.go:52] "Watching apiserver" Feb 13 19:52:59.708533 kubelet[2837]: I0213 19:52:59.708493 2837 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:52:59.741602 kubelet[2837]: E0213 19:52:59.741561 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:59.742449 kubelet[2837]: E0213 19:52:59.742417 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:59.846143 kubelet[2837]: E0213 19:52:59.846029 2837 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:52:59.846591 kubelet[2837]: E0213 19:52:59.846511 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:52:59.846852 kubelet[2837]: I0213 19:52:59.846796 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.846786882 
podStartE2EDuration="1.846786882s" podCreationTimestamp="2025-02-13 19:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:59.846041569 +0000 UTC m=+1.226735168" watchObservedRunningTime="2025-02-13 19:52:59.846786882 +0000 UTC m=+1.227480481" Feb 13 19:53:00.002476 kubelet[2837]: I0213 19:53:00.002283 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.002267742 podStartE2EDuration="2.002267742s" podCreationTimestamp="2025-02-13 19:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:00.001925429 +0000 UTC m=+1.382619028" watchObservedRunningTime="2025-02-13 19:53:00.002267742 +0000 UTC m=+1.382961341" Feb 13 19:53:00.035422 kubelet[2837]: I0213 19:53:00.035347 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.035321276 podStartE2EDuration="2.035321276s" podCreationTimestamp="2025-02-13 19:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:00.035251844 +0000 UTC m=+1.415945453" watchObservedRunningTime="2025-02-13 19:53:00.035321276 +0000 UTC m=+1.416014875" Feb 13 19:53:00.745511 kubelet[2837]: E0213 19:53:00.745468 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:02.188755 update_engine[1585]: I20250213 19:53:02.188642 1585 update_attempter.cc:509] Updating boot flags... Feb 13 19:53:02.229232 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2911) Feb 13 19:53:02.281277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2911) Feb 13 19:53:02.309546 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2911) Feb 13 19:53:03.135218 kubelet[2837]: E0213 19:53:03.135139 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:03.350776 kubelet[2837]: E0213 19:53:03.350728 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:03.535185 kubelet[2837]: E0213 19:53:03.535123 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:03.823023 sudo[1804]: pam_unix(sudo:session): session closed for user root Feb 13 19:53:03.824946 sshd[1803]: Connection closed by 10.0.0.1 port 33770 Feb 13 19:53:03.825454 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:03.830481 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:33770.service: Deactivated successfully. Feb 13 19:53:03.833024 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:53:03.833075 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:53:03.834793 systemd-logind[1584]: Removed session 7. 
Feb 13 19:53:12.469551 kubelet[2837]: I0213 19:53:12.469514 2837 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:53:12.470144 containerd[1627]: time="2025-02-13T19:53:12.470110806Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:53:12.470440 kubelet[2837]: I0213 19:53:12.470338 2837 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:53:13.139652 kubelet[2837]: E0213 19:53:13.139619 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:13.194409 kubelet[2837]: I0213 19:53:13.194368 2837 topology_manager.go:215] "Topology Admit Handler" podUID="b05e3884-450b-4f87-8759-98188b4542d7" podNamespace="kube-system" podName="kube-proxy-4lpwd" Feb 13 19:53:13.354850 kubelet[2837]: E0213 19:53:13.354813 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:13.388681 kubelet[2837]: I0213 19:53:13.388624 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b05e3884-450b-4f87-8759-98188b4542d7-kube-proxy\") pod \"kube-proxy-4lpwd\" (UID: \"b05e3884-450b-4f87-8759-98188b4542d7\") " pod="kube-system/kube-proxy-4lpwd" Feb 13 19:53:13.388681 kubelet[2837]: I0213 19:53:13.388670 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b05e3884-450b-4f87-8759-98188b4542d7-xtables-lock\") pod \"kube-proxy-4lpwd\" (UID: \"b05e3884-450b-4f87-8759-98188b4542d7\") " pod="kube-system/kube-proxy-4lpwd" Feb 13 19:53:13.388883 kubelet[2837]: I0213 19:53:13.388730 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b05e3884-450b-4f87-8759-98188b4542d7-lib-modules\") pod \"kube-proxy-4lpwd\" (UID: \"b05e3884-450b-4f87-8759-98188b4542d7\") " pod="kube-system/kube-proxy-4lpwd" Feb 13 19:53:13.388883 kubelet[2837]: I0213 19:53:13.388753 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6brg\" (UniqueName: \"kubernetes.io/projected/b05e3884-450b-4f87-8759-98188b4542d7-kube-api-access-r6brg\") pod \"kube-proxy-4lpwd\" (UID: \"b05e3884-450b-4f87-8759-98188b4542d7\") " pod="kube-system/kube-proxy-4lpwd" Feb 13 19:53:13.511828 kubelet[2837]: I0213 19:53:13.510929 2837 topology_manager.go:215] "Topology Admit Handler" podUID="4ca6320c-8355-460a-bf0a-302d1e633447" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-dthnq" Feb 13 19:53:13.540296 kubelet[2837]: E0213 19:53:13.540238 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:13.690930 kubelet[2837]: I0213 19:53:13.690858 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktcs9\" (UniqueName: \"kubernetes.io/projected/4ca6320c-8355-460a-bf0a-302d1e633447-kube-api-access-ktcs9\") pod \"tigera-operator-7bc55997bb-dthnq\" (UID: 
\"4ca6320c-8355-460a-bf0a-302d1e633447\") " pod="tigera-operator/tigera-operator-7bc55997bb-dthnq" Feb 13 19:53:13.690930 kubelet[2837]: I0213 19:53:13.690909 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ca6320c-8355-460a-bf0a-302d1e633447-var-lib-calico\") pod \"tigera-operator-7bc55997bb-dthnq\" (UID: \"4ca6320c-8355-460a-bf0a-302d1e633447\") " pod="tigera-operator/tigera-operator-7bc55997bb-dthnq" Feb 13 19:53:13.799260 kubelet[2837]: E0213 19:53:13.798634 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:13.800056 containerd[1627]: time="2025-02-13T19:53:13.800024118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4lpwd,Uid:b05e3884-450b-4f87-8759-98188b4542d7,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:13.817613 containerd[1627]: time="2025-02-13T19:53:13.817564573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-dthnq,Uid:4ca6320c-8355-460a-bf0a-302d1e633447,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:53:13.825949 containerd[1627]: time="2025-02-13T19:53:13.825316833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:13.825949 containerd[1627]: time="2025-02-13T19:53:13.825915264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:13.825949 containerd[1627]: time="2025-02-13T19:53:13.825928809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:13.826124 containerd[1627]: time="2025-02-13T19:53:13.826036432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:13.847788 containerd[1627]: time="2025-02-13T19:53:13.847477172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:13.847788 containerd[1627]: time="2025-02-13T19:53:13.847540361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:13.847788 containerd[1627]: time="2025-02-13T19:53:13.847563645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:13.847788 containerd[1627]: time="2025-02-13T19:53:13.847699922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:13.869187 containerd[1627]: time="2025-02-13T19:53:13.869148628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4lpwd,Uid:b05e3884-450b-4f87-8759-98188b4542d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"32c183899b81d442dcee3ee8c779b2d8638c44c7712e4bee9c2b38a5624fabd8\"" Feb 13 19:53:13.869941 kubelet[2837]: E0213 19:53:13.869848 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:13.872771 containerd[1627]: time="2025-02-13T19:53:13.872729991Z" level=info msg="CreateContainer within sandbox \"32c183899b81d442dcee3ee8c779b2d8638c44c7712e4bee9c2b38a5624fabd8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:53:13.890723 containerd[1627]: time="2025-02-13T19:53:13.890604186Z" level=info msg="CreateContainer within sandbox \"32c183899b81d442dcee3ee8c779b2d8638c44c7712e4bee9c2b38a5624fabd8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1702ac08b06966ec7c6c042399e7e55b744b72bfb73af045db7d04d09b7ab72d\"" Feb 13 19:53:13.893022 containerd[1627]: time="2025-02-13T19:53:13.892982046Z" level=info msg="StartContainer for \"1702ac08b06966ec7c6c042399e7e55b744b72bfb73af045db7d04d09b7ab72d\"" Feb 13 19:53:13.901918 containerd[1627]: time="2025-02-13T19:53:13.901871363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-dthnq,Uid:4ca6320c-8355-460a-bf0a-302d1e633447,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"67c2c180d06fd905d5a126a637d82122129abc9ec6468eac1acc0493cdd3577a\"" Feb 13 19:53:13.904449 containerd[1627]: time="2025-02-13T19:53:13.904417341Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:53:13.967535 containerd[1627]: time="2025-02-13T19:53:13.967481977Z" level=info msg="StartContainer for \"1702ac08b06966ec7c6c042399e7e55b744b72bfb73af045db7d04d09b7ab72d\" returns successfully" Feb 13 19:53:14.771428 kubelet[2837]: E0213 19:53:14.771396 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:16.916514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069681603.mount: Deactivated successfully. 
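The containerd records above trace the CRI lifecycle for both pods in order: RunPodSandbox returns a sandbox id, CreateContainer places kube-proxy inside that sandbox, StartContainer confirms it is running, and the tigera-operator sandbox then waits on its image pull. A throwaway Python sketch, assuming the journal text is piped on stdin, that pairs each pod with the sandbox id returned for it (the regex is tuned to the exact message shape logged here, escaped quotes included, and is not a general CRI parser):

    #!/usr/bin/env python3
    """Pair pod names with sandbox ids from containerd journal lines.
    Tuned to the msg format in this log; not a general parser."""
    import re
    import sys

    # Matches: RunPodSandbox for &PodSandboxMetadata{Name:<pod>,...}
    #          ... returns sandbox id \"<64-hex-chars>\"
    SANDBOX = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),'
        r'.*?returns sandbox id \\"(?P<sid>[0-9a-f]{64})\\"'
    )

    for m in SANDBOX.finditer(sys.stdin.read()):
        print(f"{m.group('pod')} -> sandbox {m.group('sid')[:12]}")

Fed this excerpt on stdin, it should print kube-proxy-4lpwd and tigera-operator-7bc55997bb-dthnq with the two sandbox ids logged above; request-only RunPodSandbox lines (without "returns") are skipped because the pattern does not cross newlines.
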
Feb 13 19:53:18.000091 containerd[1627]: time="2025-02-13T19:53:18.000035909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:18.100215 containerd[1627]: time="2025-02-13T19:53:18.100135733Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:53:18.159753 containerd[1627]: time="2025-02-13T19:53:18.159692663Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:18.189955 containerd[1627]: time="2025-02-13T19:53:18.189875039Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:18.190823 containerd[1627]: time="2025-02-13T19:53:18.190770327Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.28631765s" Feb 13 19:53:18.190823 containerd[1627]: time="2025-02-13T19:53:18.190818687Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:53:18.192981 containerd[1627]: time="2025-02-13T19:53:18.192952529Z" level=info msg="CreateContainer within sandbox \"67c2c180d06fd905d5a126a637d82122129abc9ec6468eac1acc0493cdd3577a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:53:18.523126 containerd[1627]: time="2025-02-13T19:53:18.523069677Z" level=info msg="CreateContainer within sandbox \"67c2c180d06fd905d5a126a637d82122129abc9ec6468eac1acc0493cdd3577a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"248f5f503b4a449879827a8b08a98c6a17fb140abb54761861459c5017b09b50\"" Feb 13 19:53:18.523452 containerd[1627]: time="2025-02-13T19:53:18.523415598Z" level=info msg="StartContainer for \"248f5f503b4a449879827a8b08a98c6a17fb140abb54761861459c5017b09b50\"" Feb 13 19:53:19.395382 containerd[1627]: time="2025-02-13T19:53:19.395332551Z" level=info msg="StartContainer for \"248f5f503b4a449879827a8b08a98c6a17fb140abb54761861459c5017b09b50\" returns successfully" Feb 13 19:53:20.407610 kubelet[2837]: I0213 19:53:20.407551 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4lpwd" podStartSLOduration=7.407534457 podStartE2EDuration="7.407534457s" podCreationTimestamp="2025-02-13 19:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:14.986583458 +0000 UTC m=+16.367277057" watchObservedRunningTime="2025-02-13 19:53:20.407534457 +0000 UTC m=+21.788228056" Feb 13 19:53:20.408286 kubelet[2837]: I0213 19:53:20.407652 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-dthnq" podStartSLOduration=3.118985257 podStartE2EDuration="7.407646087s" podCreationTimestamp="2025-02-13 19:53:13 +0000 UTC" firstStartedPulling="2025-02-13 19:53:13.90294987 +0000 UTC m=+15.283643469" 
lastFinishedPulling="2025-02-13 19:53:18.1916107 +0000 UTC m=+19.572304299" observedRunningTime="2025-02-13 19:53:20.407326817 +0000 UTC m=+21.788020416" watchObservedRunningTime="2025-02-13 19:53:20.407646087 +0000 UTC m=+21.788339677" Feb 13 19:53:21.544891 kubelet[2837]: I0213 19:53:21.544843 2837 topology_manager.go:215] "Topology Admit Handler" podUID="ad96c9ac-87c1-4010-8c12-b4122c31a458" podNamespace="calico-system" podName="calico-typha-6b77b67d49-kwdxm" Feb 13 19:53:21.631045 kubelet[2837]: I0213 19:53:21.630992 2837 topology_manager.go:215] "Topology Admit Handler" podUID="ea39717d-8aea-4122-aadd-135b8020c8b2" podNamespace="calico-system" podName="calico-node-jdzdm" Feb 13 19:53:21.743394 kubelet[2837]: I0213 19:53:21.743337 2837 topology_manager.go:215] "Topology Admit Handler" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" podNamespace="calico-system" podName="csi-node-driver-jzmps" Feb 13 19:53:21.743721 kubelet[2837]: E0213 19:53:21.743690 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:21.746759 kubelet[2837]: I0213 19:53:21.746369 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-xtables-lock\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746759 kubelet[2837]: I0213 19:53:21.746417 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56m4n\" (UniqueName: \"kubernetes.io/projected/ea39717d-8aea-4122-aadd-135b8020c8b2-kube-api-access-56m4n\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746759 kubelet[2837]: I0213 19:53:21.746440 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-cni-bin-dir\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746759 kubelet[2837]: I0213 19:53:21.746459 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-lib-modules\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746759 kubelet[2837]: I0213 19:53:21.746479 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-cni-log-dir\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746986 kubelet[2837]: I0213 19:53:21.746495 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea39717d-8aea-4122-aadd-135b8020c8b2-tigera-ca-bundle\") pod \"calico-node-jdzdm\" (UID: 
\"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746986 kubelet[2837]: I0213 19:53:21.746512 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-var-run-calico\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746986 kubelet[2837]: I0213 19:53:21.746527 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ad96c9ac-87c1-4010-8c12-b4122c31a458-typha-certs\") pod \"calico-typha-6b77b67d49-kwdxm\" (UID: \"ad96c9ac-87c1-4010-8c12-b4122c31a458\") " pod="calico-system/calico-typha-6b77b67d49-kwdxm" Feb 13 19:53:21.746986 kubelet[2837]: I0213 19:53:21.746545 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-policysync\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.746986 kubelet[2837]: I0213 19:53:21.746559 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea39717d-8aea-4122-aadd-135b8020c8b2-node-certs\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.747136 kubelet[2837]: I0213 19:53:21.746603 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-var-lib-calico\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.747136 kubelet[2837]: I0213 19:53:21.746619 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad96c9ac-87c1-4010-8c12-b4122c31a458-tigera-ca-bundle\") pod \"calico-typha-6b77b67d49-kwdxm\" (UID: \"ad96c9ac-87c1-4010-8c12-b4122c31a458\") " pod="calico-system/calico-typha-6b77b67d49-kwdxm" Feb 13 19:53:21.747136 kubelet[2837]: I0213 19:53:21.746647 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-cni-net-dir\") pod \"calico-node-jdzdm\" (UID: \"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.747136 kubelet[2837]: I0213 19:53:21.746663 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfsn\" (UniqueName: \"kubernetes.io/projected/ad96c9ac-87c1-4010-8c12-b4122c31a458-kube-api-access-ddfsn\") pod \"calico-typha-6b77b67d49-kwdxm\" (UID: \"ad96c9ac-87c1-4010-8c12-b4122c31a458\") " pod="calico-system/calico-typha-6b77b67d49-kwdxm" Feb 13 19:53:21.747136 kubelet[2837]: I0213 19:53:21.746678 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea39717d-8aea-4122-aadd-135b8020c8b2-flexvol-driver-host\") pod \"calico-node-jdzdm\" (UID: 
\"ea39717d-8aea-4122-aadd-135b8020c8b2\") " pod="calico-system/calico-node-jdzdm" Feb 13 19:53:21.848326 kubelet[2837]: I0213 19:53:21.847493 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b48fef47-3cfd-4e49-87b0-9de0481fb342-varrun\") pod \"csi-node-driver-jzmps\" (UID: \"b48fef47-3cfd-4e49-87b0-9de0481fb342\") " pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:21.848326 kubelet[2837]: I0213 19:53:21.847561 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b48fef47-3cfd-4e49-87b0-9de0481fb342-registration-dir\") pod \"csi-node-driver-jzmps\" (UID: \"b48fef47-3cfd-4e49-87b0-9de0481fb342\") " pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:21.848326 kubelet[2837]: I0213 19:53:21.847628 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b48fef47-3cfd-4e49-87b0-9de0481fb342-kubelet-dir\") pod \"csi-node-driver-jzmps\" (UID: \"b48fef47-3cfd-4e49-87b0-9de0481fb342\") " pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:21.848326 kubelet[2837]: I0213 19:53:21.847651 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b48fef47-3cfd-4e49-87b0-9de0481fb342-socket-dir\") pod \"csi-node-driver-jzmps\" (UID: \"b48fef47-3cfd-4e49-87b0-9de0481fb342\") " pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:21.848326 kubelet[2837]: I0213 19:53:21.847674 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgq7h\" (UniqueName: \"kubernetes.io/projected/b48fef47-3cfd-4e49-87b0-9de0481fb342-kube-api-access-jgq7h\") pod \"csi-node-driver-jzmps\" (UID: \"b48fef47-3cfd-4e49-87b0-9de0481fb342\") " pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:21.850057 kubelet[2837]: E0213 19:53:21.850026 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.850057 kubelet[2837]: W0213 19:53:21.850052 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.850180 kubelet[2837]: E0213 19:53:21.850071 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.851233 kubelet[2837]: E0213 19:53:21.851042 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.851233 kubelet[2837]: W0213 19:53:21.851061 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.851233 kubelet[2837]: E0213 19:53:21.851080 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.851654 kubelet[2837]: E0213 19:53:21.851560 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.851654 kubelet[2837]: W0213 19:53:21.851572 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.851654 kubelet[2837]: E0213 19:53:21.851586 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.852017 kubelet[2837]: E0213 19:53:21.851908 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.852017 kubelet[2837]: W0213 19:53:21.851918 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.852017 kubelet[2837]: E0213 19:53:21.851931 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.852491 kubelet[2837]: E0213 19:53:21.852372 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.852491 kubelet[2837]: W0213 19:53:21.852382 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.852491 kubelet[2837]: E0213 19:53:21.852392 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.852706 kubelet[2837]: E0213 19:53:21.852696 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.852768 kubelet[2837]: W0213 19:53:21.852744 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.852984 kubelet[2837]: E0213 19:53:21.852807 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.853129 kubelet[2837]: E0213 19:53:21.853071 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.853129 kubelet[2837]: W0213 19:53:21.853082 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.853129 kubelet[2837]: E0213 19:53:21.853095 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.853486 kubelet[2837]: E0213 19:53:21.853395 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.853486 kubelet[2837]: W0213 19:53:21.853405 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.853732 kubelet[2837]: E0213 19:53:21.853708 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.853787 kubelet[2837]: E0213 19:53:21.853760 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.853787 kubelet[2837]: W0213 19:53:21.853766 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854049 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854171 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855284 kubelet[2837]: W0213 19:53:21.854179 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854282 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854420 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855284 kubelet[2837]: W0213 19:53:21.854426 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854502 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854630 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855284 kubelet[2837]: W0213 19:53:21.854637 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855284 kubelet[2837]: E0213 19:53:21.854721 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.854829 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855525 kubelet[2837]: W0213 19:53:21.854835 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.854856 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.855104 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855525 kubelet[2837]: W0213 19:53:21.855112 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.855131 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.855329 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855525 kubelet[2837]: W0213 19:53:21.855336 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.855355 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855525 kubelet[2837]: E0213 19:53:21.855527 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855738 kubelet[2837]: W0213 19:53:21.855535 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855738 kubelet[2837]: E0213 19:53:21.855545 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.855794 kubelet[2837]: E0213 19:53:21.855777 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.855794 kubelet[2837]: W0213 19:53:21.855788 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.855914 kubelet[2837]: E0213 19:53:21.855808 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.856009 kubelet[2837]: E0213 19:53:21.855996 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.856009 kubelet[2837]: W0213 19:53:21.856007 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.856009 kubelet[2837]: E0213 19:53:21.856015 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.856191 kubelet[2837]: E0213 19:53:21.856178 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.856191 kubelet[2837]: W0213 19:53:21.856190 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.856275 kubelet[2837]: E0213 19:53:21.856209 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.857035 kubelet[2837]: E0213 19:53:21.857016 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.857035 kubelet[2837]: W0213 19:53:21.857030 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.857095 kubelet[2837]: E0213 19:53:21.857038 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.857928 kubelet[2837]: E0213 19:53:21.857868 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.857928 kubelet[2837]: W0213 19:53:21.857895 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.857928 kubelet[2837]: E0213 19:53:21.857903 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.860070 kubelet[2837]: E0213 19:53:21.860047 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.860126 kubelet[2837]: W0213 19:53:21.860069 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.860126 kubelet[2837]: E0213 19:53:21.860088 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.936097 kubelet[2837]: E0213 19:53:21.936067 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:21.936556 containerd[1627]: time="2025-02-13T19:53:21.936507980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jdzdm,Uid:ea39717d-8aea-4122-aadd-135b8020c8b2,Namespace:calico-system,Attempt:0,}" Feb 13 19:53:21.949417 kubelet[2837]: E0213 19:53:21.948850 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.949417 kubelet[2837]: W0213 19:53:21.948901 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.949417 kubelet[2837]: E0213 19:53:21.948923 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.949417 kubelet[2837]: E0213 19:53:21.949363 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.949417 kubelet[2837]: W0213 19:53:21.949377 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.949417 kubelet[2837]: E0213 19:53:21.949395 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.949868 kubelet[2837]: E0213 19:53:21.949754 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.949868 kubelet[2837]: W0213 19:53:21.949767 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.949868 kubelet[2837]: E0213 19:53:21.949809 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.950221 kubelet[2837]: E0213 19:53:21.950110 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.950221 kubelet[2837]: W0213 19:53:21.950121 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.950221 kubelet[2837]: E0213 19:53:21.950135 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.950455 kubelet[2837]: E0213 19:53:21.950438 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.950455 kubelet[2837]: W0213 19:53:21.950453 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.950597 kubelet[2837]: E0213 19:53:21.950471 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.950748 kubelet[2837]: E0213 19:53:21.950731 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.950748 kubelet[2837]: W0213 19:53:21.950745 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.950876 kubelet[2837]: E0213 19:53:21.950848 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.951145 kubelet[2837]: E0213 19:53:21.951048 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.951145 kubelet[2837]: W0213 19:53:21.951059 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.951240 kubelet[2837]: E0213 19:53:21.951150 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.951390 kubelet[2837]: E0213 19:53:21.951349 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.951441 kubelet[2837]: W0213 19:53:21.951390 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.951441 kubelet[2837]: E0213 19:53:21.951417 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.951796 kubelet[2837]: E0213 19:53:21.951672 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.951796 kubelet[2837]: W0213 19:53:21.951687 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.951796 kubelet[2837]: E0213 19:53:21.951703 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.951963 kubelet[2837]: E0213 19:53:21.951946 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.951963 kubelet[2837]: W0213 19:53:21.951961 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.952091 kubelet[2837]: E0213 19:53:21.951990 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.952334 kubelet[2837]: E0213 19:53:21.952317 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.952334 kubelet[2837]: W0213 19:53:21.952331 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.952417 kubelet[2837]: E0213 19:53:21.952358 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.952639 kubelet[2837]: E0213 19:53:21.952622 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.952639 kubelet[2837]: W0213 19:53:21.952637 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.952726 kubelet[2837]: E0213 19:53:21.952662 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.953024 kubelet[2837]: E0213 19:53:21.953009 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.953024 kubelet[2837]: W0213 19:53:21.953022 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.953117 kubelet[2837]: E0213 19:53:21.953053 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.953544 kubelet[2837]: E0213 19:53:21.953528 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.953544 kubelet[2837]: W0213 19:53:21.953541 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.953682 kubelet[2837]: E0213 19:53:21.953607 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.953798 kubelet[2837]: E0213 19:53:21.953783 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.953798 kubelet[2837]: W0213 19:53:21.953796 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.953931 kubelet[2837]: E0213 19:53:21.953811 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.954046 kubelet[2837]: E0213 19:53:21.954029 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.954046 kubelet[2837]: W0213 19:53:21.954042 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.954122 kubelet[2837]: E0213 19:53:21.954056 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.954289 kubelet[2837]: E0213 19:53:21.954273 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.954289 kubelet[2837]: W0213 19:53:21.954288 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.954362 kubelet[2837]: E0213 19:53:21.954306 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.954530 kubelet[2837]: E0213 19:53:21.954515 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.954530 kubelet[2837]: W0213 19:53:21.954528 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.954609 kubelet[2837]: E0213 19:53:21.954542 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.954767 kubelet[2837]: E0213 19:53:21.954751 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.954767 kubelet[2837]: W0213 19:53:21.954765 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.954876 kubelet[2837]: E0213 19:53:21.954853 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.954985 kubelet[2837]: E0213 19:53:21.954970 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.954985 kubelet[2837]: W0213 19:53:21.954983 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.955097 kubelet[2837]: E0213 19:53:21.955070 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.955301 kubelet[2837]: E0213 19:53:21.955285 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.955301 kubelet[2837]: W0213 19:53:21.955298 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.955368 kubelet[2837]: E0213 19:53:21.955325 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.955666 kubelet[2837]: E0213 19:53:21.955652 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.955666 kubelet[2837]: W0213 19:53:21.955664 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.955742 kubelet[2837]: E0213 19:53:21.955685 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.956109 kubelet[2837]: E0213 19:53:21.956085 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.956109 kubelet[2837]: W0213 19:53:21.956099 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.956177 kubelet[2837]: E0213 19:53:21.956123 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.956403 kubelet[2837]: E0213 19:53:21.956390 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.956403 kubelet[2837]: W0213 19:53:21.956402 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.957117 kubelet[2837]: E0213 19:53:21.956417 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:21.957117 kubelet[2837]: E0213 19:53:21.956639 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.957117 kubelet[2837]: W0213 19:53:21.956649 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.957117 kubelet[2837]: E0213 19:53:21.956659 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:53:21.961070 containerd[1627]: time="2025-02-13T19:53:21.960933465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:21.961070 containerd[1627]: time="2025-02-13T19:53:21.960994290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:21.961070 containerd[1627]: time="2025-02-13T19:53:21.961012123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.961290 containerd[1627]: time="2025-02-13T19:53:21.961127461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:21.965085 kubelet[2837]: E0213 19:53:21.964990 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:53:21.965085 kubelet[2837]: W0213 19:53:21.965009 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:53:21.965085 kubelet[2837]: E0213 19:53:21.965032 2837 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:53:22.003294 containerd[1627]: time="2025-02-13T19:53:22.003250488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jdzdm,Uid:ea39717d-8aea-4122-aadd-135b8020c8b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\"" Feb 13 19:53:22.006540 kubelet[2837]: E0213 19:53:22.006507 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:22.011511 containerd[1627]: time="2025-02-13T19:53:22.011474480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:53:22.153024 kubelet[2837]: E0213 19:53:22.152898 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:22.153785 containerd[1627]: time="2025-02-13T19:53:22.153447048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b77b67d49-kwdxm,Uid:ad96c9ac-87c1-4010-8c12-b4122c31a458,Namespace:calico-system,Attempt:0,}" Feb 13 19:53:22.175891 containerd[1627]: time="2025-02-13T19:53:22.175798785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:22.175891 containerd[1627]: time="2025-02-13T19:53:22.175861183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:22.176090 containerd[1627]: time="2025-02-13T19:53:22.175875720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:22.176090 containerd[1627]: time="2025-02-13T19:53:22.176021465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:22.226797 containerd[1627]: time="2025-02-13T19:53:22.226743660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b77b67d49-kwdxm,Uid:ad96c9ac-87c1-4010-8c12-b4122c31a458,Namespace:calico-system,Attempt:0,} returns sandbox id \"97b521dae466164a6e1cdf532108dfe7145ee255f1eeef00bf4807fcbbbb9e1a\"" Feb 13 19:53:22.227434 kubelet[2837]: E0213 19:53:22.227409 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:23.729612 kubelet[2837]: E0213 19:53:23.729560 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:23.926595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841289152.mount: Deactivated successfully. 
Feb 13 19:53:24.003139 containerd[1627]: time="2025-02-13T19:53:24.003022848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:24.004027 containerd[1627]: time="2025-02-13T19:53:24.003969729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:53:24.005381 containerd[1627]: time="2025-02-13T19:53:24.005345518Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:24.007323 containerd[1627]: time="2025-02-13T19:53:24.007289987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:24.007914 containerd[1627]: time="2025-02-13T19:53:24.007888413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.996374218s" Feb 13 19:53:24.007965 containerd[1627]: time="2025-02-13T19:53:24.007917527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:53:24.009931 containerd[1627]: time="2025-02-13T19:53:24.009908563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:53:24.021682 containerd[1627]: time="2025-02-13T19:53:24.021643094Z" level=info msg="CreateContainer within sandbox \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:53:24.037308 containerd[1627]: time="2025-02-13T19:53:24.037265129Z" level=info msg="CreateContainer within sandbox \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"45e0c2b62dd0fbc5ac19d3cb34dfbd86b32889151694b8008948b2961b4d53f3\"" Feb 13 19:53:24.037703 containerd[1627]: time="2025-02-13T19:53:24.037660343Z" level=info msg="StartContainer for \"45e0c2b62dd0fbc5ac19d3cb34dfbd86b32889151694b8008948b2961b4d53f3\"" Feb 13 19:53:24.132868 containerd[1627]: time="2025-02-13T19:53:24.132827239Z" level=info msg="StartContainer for \"45e0c2b62dd0fbc5ac19d3cb34dfbd86b32889151694b8008948b2961b4d53f3\" returns successfully" Feb 13 19:53:24.161333 containerd[1627]: time="2025-02-13T19:53:24.161253547Z" level=info msg="shim disconnected" id=45e0c2b62dd0fbc5ac19d3cb34dfbd86b32889151694b8008948b2961b4d53f3 namespace=k8s.io Feb 13 19:53:24.161333 containerd[1627]: time="2025-02-13T19:53:24.161307749Z" level=warning msg="cleaning up after shim disconnected" id=45e0c2b62dd0fbc5ac19d3cb34dfbd86b32889151694b8008948b2961b4d53f3 namespace=k8s.io Feb 13 19:53:24.161333 containerd[1627]: time="2025-02-13T19:53:24.161316656Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:24.418844 kubelet[2837]: E0213 19:53:24.418716 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:24.906518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45e0c2b62dd0fbc5ac19d3cb34dfbd86b32889151694b8008948b2961b4d53f3-rootfs.mount: Deactivated successfully. Feb 13 19:53:25.729912 kubelet[2837]: E0213 19:53:25.729863 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:26.115233 containerd[1627]: time="2025-02-13T19:53:26.115094554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:26.116084 containerd[1627]: time="2025-02-13T19:53:26.116025336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 19:53:26.117309 containerd[1627]: time="2025-02-13T19:53:26.117279925Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:26.119758 containerd[1627]: time="2025-02-13T19:53:26.119690709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:26.120377 containerd[1627]: time="2025-02-13T19:53:26.120346633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.110312683s" Feb 13 19:53:26.120425 containerd[1627]: time="2025-02-13T19:53:26.120379485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:53:26.121406 containerd[1627]: time="2025-02-13T19:53:26.121355130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:53:26.128350 containerd[1627]: time="2025-02-13T19:53:26.128229800Z" level=info msg="CreateContainer within sandbox \"97b521dae466164a6e1cdf532108dfe7145ee255f1eeef00bf4807fcbbbb9e1a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:53:26.141320 containerd[1627]: time="2025-02-13T19:53:26.141279434Z" level=info msg="CreateContainer within sandbox \"97b521dae466164a6e1cdf532108dfe7145ee255f1eeef00bf4807fcbbbb9e1a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b59246297d06427f813beb3ca7e5b1b62a448d261c0752eb283cbc9a02b4003\"" Feb 13 19:53:26.141755 containerd[1627]: time="2025-02-13T19:53:26.141654229Z" level=info msg="StartContainer for \"2b59246297d06427f813beb3ca7e5b1b62a448d261c0752eb283cbc9a02b4003\"" Feb 13 19:53:26.214417 containerd[1627]: time="2025-02-13T19:53:26.214379572Z" level=info msg="StartContainer for \"2b59246297d06427f813beb3ca7e5b1b62a448d261c0752eb283cbc9a02b4003\" returns successfully" Feb 13 19:53:26.423022 kubelet[2837]: E0213 19:53:26.422903 2837 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:26.431503 kubelet[2837]: I0213 19:53:26.431442 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b77b67d49-kwdxm" podStartSLOduration=1.5382242609999999 podStartE2EDuration="5.431424704s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:22.22796482 +0000 UTC m=+23.608658419" lastFinishedPulling="2025-02-13 19:53:26.121165263 +0000 UTC m=+27.501858862" observedRunningTime="2025-02-13 19:53:26.431027888 +0000 UTC m=+27.811721487" watchObservedRunningTime="2025-02-13 19:53:26.431424704 +0000 UTC m=+27.812118303" Feb 13 19:53:27.423564 kubelet[2837]: I0213 19:53:27.423472 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:53:27.424281 kubelet[2837]: E0213 19:53:27.424251 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:27.731090 kubelet[2837]: E0213 19:53:27.731017 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:28.759635 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:37546.service - OpenSSH per-connection server daemon (10.0.0.1:37546). Feb 13 19:53:28.804777 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 37546 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:28.806806 sshd-session[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:28.813412 systemd-logind[1584]: New session 8 of user core. Feb 13 19:53:28.818465 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:53:28.973961 sshd[3496]: Connection closed by 10.0.0.1 port 37546 Feb 13 19:53:28.974474 sshd-session[3489]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:28.980253 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:37546.service: Deactivated successfully. Feb 13 19:53:28.984697 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:53:28.985628 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:53:28.987027 systemd-logind[1584]: Removed session 8. 
Feb 13 19:53:29.729703 kubelet[2837]: E0213 19:53:29.729628 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:30.725501 containerd[1627]: time="2025-02-13T19:53:30.725440290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:30.726379 containerd[1627]: time="2025-02-13T19:53:30.726303802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:53:30.727608 containerd[1627]: time="2025-02-13T19:53:30.727571996Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:30.731507 containerd[1627]: time="2025-02-13T19:53:30.730300764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:30.732228 containerd[1627]: time="2025-02-13T19:53:30.732094135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.610620002s" Feb 13 19:53:30.732228 containerd[1627]: time="2025-02-13T19:53:30.732137036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:53:30.734374 containerd[1627]: time="2025-02-13T19:53:30.734325358Z" level=info msg="CreateContainer within sandbox \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:53:30.751959 containerd[1627]: time="2025-02-13T19:53:30.751908537Z" level=info msg="CreateContainer within sandbox \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"01e97f53883aaac48f9935244873e7ea9527157a0fc3641b57a3a9db57ccdf15\"" Feb 13 19:53:30.752728 containerd[1627]: time="2025-02-13T19:53:30.752682952Z" level=info msg="StartContainer for \"01e97f53883aaac48f9935244873e7ea9527157a0fc3641b57a3a9db57ccdf15\"" Feb 13 19:53:31.170457 containerd[1627]: time="2025-02-13T19:53:31.170322365Z" level=info msg="StartContainer for \"01e97f53883aaac48f9935244873e7ea9527157a0fc3641b57a3a9db57ccdf15\" returns successfully" Feb 13 19:53:31.433876 kubelet[2837]: E0213 19:53:31.433751 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:31.729444 kubelet[2837]: E0213 19:53:31.729374 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:32.332433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e97f53883aaac48f9935244873e7ea9527157a0fc3641b57a3a9db57ccdf15-rootfs.mount: Deactivated successfully. Feb 13 19:53:32.335059 containerd[1627]: time="2025-02-13T19:53:32.334999734Z" level=info msg="shim disconnected" id=01e97f53883aaac48f9935244873e7ea9527157a0fc3641b57a3a9db57ccdf15 namespace=k8s.io Feb 13 19:53:32.335059 containerd[1627]: time="2025-02-13T19:53:32.335051891Z" level=warning msg="cleaning up after shim disconnected" id=01e97f53883aaac48f9935244873e7ea9527157a0fc3641b57a3a9db57ccdf15 namespace=k8s.io Feb 13 19:53:32.335059 containerd[1627]: time="2025-02-13T19:53:32.335059806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:32.391243 kubelet[2837]: I0213 19:53:32.391215 2837 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:53:32.408420 kubelet[2837]: I0213 19:53:32.408345 2837 topology_manager.go:215] "Topology Admit Handler" podUID="2ba376fe-d115-494d-afe1-f6a4f0570511" podNamespace="calico-system" podName="calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:32.411001 kubelet[2837]: I0213 19:53:32.410960 2837 topology_manager.go:215] "Topology Admit Handler" podUID="e1bbe453-c854-45bd-a73e-26eecfb4fd84" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:32.417234 kubelet[2837]: I0213 19:53:32.416279 2837 topology_manager.go:215] "Topology Admit Handler" podUID="4aa420ca-11be-4692-8d69-b62bdc73431a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:32.417234 kubelet[2837]: I0213 19:53:32.416476 2837 topology_manager.go:215] "Topology Admit Handler" podUID="8d8f81e4-3b2c-4d15-9eda-1f02e5f43765" podNamespace="calico-apiserver" podName="calico-apiserver-b95766759-6rgg4" Feb 13 19:53:32.419048 kubelet[2837]: I0213 19:53:32.419013 2837 topology_manager.go:215] "Topology Admit Handler" podUID="b6680963-b8a1-4333-9361-117009928f59" podNamespace="calico-apiserver" podName="calico-apiserver-b95766759-24q6w" Feb 13 19:53:32.437304 kubelet[2837]: E0213 19:53:32.437273 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:32.437904 containerd[1627]: time="2025-02-13T19:53:32.437858981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:53:32.536548 kubelet[2837]: I0213 19:53:32.536504 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scz4t\" (UniqueName: \"kubernetes.io/projected/4aa420ca-11be-4692-8d69-b62bdc73431a-kube-api-access-scz4t\") pod \"coredns-7db6d8ff4d-gzcgv\" (UID: \"4aa420ca-11be-4692-8d69-b62bdc73431a\") " pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:32.536548 kubelet[2837]: I0213 19:53:32.536549 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1bbe453-c854-45bd-a73e-26eecfb4fd84-config-volume\") pod \"coredns-7db6d8ff4d-ppnzz\" (UID: \"e1bbe453-c854-45bd-a73e-26eecfb4fd84\") " pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:32.536811 kubelet[2837]: I0213 19:53:32.536572 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4aa420ca-11be-4692-8d69-b62bdc73431a-config-volume\") pod \"coredns-7db6d8ff4d-gzcgv\" (UID: \"4aa420ca-11be-4692-8d69-b62bdc73431a\") " pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:32.536811 kubelet[2837]: I0213 19:53:32.536608 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5cl\" (UniqueName: \"kubernetes.io/projected/2ba376fe-d115-494d-afe1-f6a4f0570511-kube-api-access-7g5cl\") pod \"calico-kube-controllers-ddcf7667d-wjflz\" (UID: \"2ba376fe-d115-494d-afe1-f6a4f0570511\") " pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:32.536811 kubelet[2837]: I0213 19:53:32.536641 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdrt6\" (UniqueName: \"kubernetes.io/projected/8d8f81e4-3b2c-4d15-9eda-1f02e5f43765-kube-api-access-zdrt6\") pod \"calico-apiserver-b95766759-6rgg4\" (UID: \"8d8f81e4-3b2c-4d15-9eda-1f02e5f43765\") " pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:32.536811 kubelet[2837]: I0213 19:53:32.536666 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6680963-b8a1-4333-9361-117009928f59-calico-apiserver-certs\") pod \"calico-apiserver-b95766759-24q6w\" (UID: \"b6680963-b8a1-4333-9361-117009928f59\") " pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:32.536811 kubelet[2837]: I0213 19:53:32.536752 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlkc\" (UniqueName: \"kubernetes.io/projected/e1bbe453-c854-45bd-a73e-26eecfb4fd84-kube-api-access-rnlkc\") pod \"coredns-7db6d8ff4d-ppnzz\" (UID: \"e1bbe453-c854-45bd-a73e-26eecfb4fd84\") " pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:32.537005 kubelet[2837]: I0213 19:53:32.536896 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d8f81e4-3b2c-4d15-9eda-1f02e5f43765-calico-apiserver-certs\") pod \"calico-apiserver-b95766759-6rgg4\" (UID: \"8d8f81e4-3b2c-4d15-9eda-1f02e5f43765\") " pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:32.537005 kubelet[2837]: I0213 19:53:32.536940 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ba376fe-d115-494d-afe1-f6a4f0570511-tigera-ca-bundle\") pod \"calico-kube-controllers-ddcf7667d-wjflz\" (UID: \"2ba376fe-d115-494d-afe1-f6a4f0570511\") " pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:32.537005 kubelet[2837]: I0213 19:53:32.536962 2837 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867rr\" (UniqueName: \"kubernetes.io/projected/b6680963-b8a1-4333-9361-117009928f59-kube-api-access-867rr\") pod \"calico-apiserver-b95766759-24q6w\" (UID: \"b6680963-b8a1-4333-9361-117009928f59\") " pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:32.715557 containerd[1627]: time="2025-02-13T19:53:32.715497088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:0,}" Feb 13 19:53:32.717943 kubelet[2837]: E0213 19:53:32.717916 2837 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:32.718350 containerd[1627]: time="2025-02-13T19:53:32.718311336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:32.723484 containerd[1627]: time="2025-02-13T19:53:32.723414855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:53:32.729328 containerd[1627]: time="2025-02-13T19:53:32.729292439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:53:32.732335 kubelet[2837]: E0213 19:53:32.732287 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:32.732618 containerd[1627]: time="2025-02-13T19:53:32.732585988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:33.733871 containerd[1627]: time="2025-02-13T19:53:33.733783846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:0,}" Feb 13 19:53:33.973256 containerd[1627]: time="2025-02-13T19:53:33.972349911Z" level=error msg="Failed to destroy network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.973449 containerd[1627]: time="2025-02-13T19:53:33.973415513Z" level=error msg="encountered an error cleaning up failed sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.973495 containerd[1627]: time="2025-02-13T19:53:33.973478371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.973764 kubelet[2837]: E0213 19:53:33.973723 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.974444 kubelet[2837]: E0213 19:53:33.974328 2837 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:33.974587 kubelet[2837]: E0213 19:53:33.974559 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:33.974756 kubelet[2837]: E0213 19:53:33.974731 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ppnzz" podUID="e1bbe453-c854-45bd-a73e-26eecfb4fd84" Feb 13 19:53:33.986543 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:55838.service - OpenSSH per-connection server daemon (10.0.0.1:55838). 
Feb 13 19:53:33.995499 containerd[1627]: time="2025-02-13T19:53:33.995452464Z" level=error msg="Failed to destroy network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.997041 containerd[1627]: time="2025-02-13T19:53:33.996076606Z" level=error msg="encountered an error cleaning up failed sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.997179 containerd[1627]: time="2025-02-13T19:53:33.997160282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.997629 kubelet[2837]: E0213 19:53:33.997577 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:33.997751 kubelet[2837]: E0213 19:53:33.997732 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:33.997879 kubelet[2837]: E0213 19:53:33.997789 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:33.997983 kubelet[2837]: E0213 19:53:33.997836 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" podUID="b6680963-b8a1-4333-9361-117009928f59" Feb 13 19:53:34.001082 containerd[1627]: time="2025-02-13T19:53:34.001011809Z" level=error msg="Failed to destroy network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.001416 containerd[1627]: time="2025-02-13T19:53:34.001385641Z" level=error msg="encountered an error cleaning up failed sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.001471 containerd[1627]: time="2025-02-13T19:53:34.001436616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.001950 kubelet[2837]: E0213 19:53:34.001683 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.001950 kubelet[2837]: E0213 19:53:34.001757 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:34.001950 kubelet[2837]: E0213 19:53:34.001781 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:34.002053 kubelet[2837]: E0213 19:53:34.001836 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" podUID="8d8f81e4-3b2c-4d15-9eda-1f02e5f43765" Feb 13 19:53:34.005599 containerd[1627]: time="2025-02-13T19:53:34.005529254Z" level=error msg="Failed to destroy network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.008438 containerd[1627]: time="2025-02-13T19:53:34.008404475Z" level=error msg="encountered an error cleaning up failed sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.008513 containerd[1627]: time="2025-02-13T19:53:34.008469738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.008713 kubelet[2837]: E0213 19:53:34.008670 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.008808 kubelet[2837]: E0213 19:53:34.008732 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:34.008808 kubelet[2837]: E0213 19:53:34.008751 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:34.008808 kubelet[2837]: E0213 19:53:34.008794 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" podUID="2ba376fe-d115-494d-afe1-f6a4f0570511" Feb 13 19:53:34.012389 containerd[1627]: time="2025-02-13T19:53:34.012339888Z" level=error msg="Failed to destroy network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.012725 containerd[1627]: time="2025-02-13T19:53:34.012695295Z" level=error msg="encountered an error cleaning up failed sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.012757 containerd[1627]: time="2025-02-13T19:53:34.012745760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.012991 kubelet[2837]: E0213 19:53:34.012953 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.013057 kubelet[2837]: E0213 19:53:34.013015 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:34.013057 kubelet[2837]: E0213 19:53:34.013036 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:34.013110 kubelet[2837]: E0213 19:53:34.013079 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gzcgv" podUID="4aa420ca-11be-4692-8d69-b62bdc73431a" Feb 13 19:53:34.019317 containerd[1627]: time="2025-02-13T19:53:34.019271639Z" level=error msg="Failed to destroy network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.019666 containerd[1627]: time="2025-02-13T19:53:34.019632787Z" level=error msg="encountered an error cleaning up failed sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.019715 containerd[1627]: time="2025-02-13T19:53:34.019689874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.019932 kubelet[2837]: E0213 19:53:34.019908 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.019977 kubelet[2837]: E0213 19:53:34.019942 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:34.019977 kubelet[2837]: E0213 19:53:34.019956 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:34.020034 kubelet[2837]: E0213 19:53:34.019991 2837 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:34.087966 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:34.089674 sshd-session[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:34.094036 systemd-logind[1584]: New session 9 of user core. Feb 13 19:53:34.100461 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:53:34.219947 sshd[3806]: Connection closed by 10.0.0.1 port 55838 Feb 13 19:53:34.220341 sshd-session[3786]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:34.224398 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:55838.service: Deactivated successfully. Feb 13 19:53:34.228181 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:53:34.229369 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:53:34.230364 systemd-logind[1584]: Removed session 9. Feb 13 19:53:34.441484 kubelet[2837]: I0213 19:53:34.441349 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7" Feb 13 19:53:34.443126 containerd[1627]: time="2025-02-13T19:53:34.442976637Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:34.443279 containerd[1627]: time="2025-02-13T19:53:34.443249580Z" level=info msg="Ensure that sandbox c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7 in task-service has been cleanup successfully" Feb 13 19:53:34.443526 containerd[1627]: time="2025-02-13T19:53:34.443493537Z" level=info msg="TearDown network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" successfully" Feb 13 19:53:34.443609 containerd[1627]: time="2025-02-13T19:53:34.443522883Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" returns successfully" Feb 13 19:53:34.443930 kubelet[2837]: I0213 19:53:34.443890 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b" Feb 13 19:53:34.444090 containerd[1627]: time="2025-02-13T19:53:34.443966536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:53:34.444502 containerd[1627]: time="2025-02-13T19:53:34.444476063Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:34.444679 containerd[1627]: time="2025-02-13T19:53:34.444655299Z" level=info msg="Ensure that sandbox 7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b in task-service has been cleanup successfully" 
Feb 13 19:53:34.444876 containerd[1627]: time="2025-02-13T19:53:34.444824468Z" level=info msg="TearDown network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" successfully" Feb 13 19:53:34.444876 containerd[1627]: time="2025-02-13T19:53:34.444845908Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" returns successfully" Feb 13 19:53:34.445109 kubelet[2837]: E0213 19:53:34.445061 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:34.445526 kubelet[2837]: I0213 19:53:34.445508 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec" Feb 13 19:53:34.446177 containerd[1627]: time="2025-02-13T19:53:34.445895800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:1,}" Feb 13 19:53:34.446488 containerd[1627]: time="2025-02-13T19:53:34.446466462Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:34.446678 containerd[1627]: time="2025-02-13T19:53:34.446651700Z" level=info msg="Ensure that sandbox cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec in task-service has been cleanup successfully" Feb 13 19:53:34.446862 containerd[1627]: time="2025-02-13T19:53:34.446832961Z" level=info msg="TearDown network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" successfully" Feb 13 19:53:34.446916 containerd[1627]: time="2025-02-13T19:53:34.446904895Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" returns successfully" Feb 13 19:53:34.446948 kubelet[2837]: I0213 19:53:34.446933 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7" Feb 13 19:53:34.447630 containerd[1627]: time="2025-02-13T19:53:34.447377644Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:34.447630 containerd[1627]: time="2025-02-13T19:53:34.447605262Z" level=info msg="Ensure that sandbox 1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7 in task-service has been cleanup successfully" Feb 13 19:53:34.447773 containerd[1627]: time="2025-02-13T19:53:34.447694068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:1,}" Feb 13 19:53:34.447806 containerd[1627]: time="2025-02-13T19:53:34.447794006Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 13 19:53:34.447843 containerd[1627]: time="2025-02-13T19:53:34.447807932Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:34.448169 containerd[1627]: time="2025-02-13T19:53:34.448140728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:1,}" Feb 13 19:53:34.448823 kubelet[2837]: I0213 19:53:34.448800 2837 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee" Feb 13 19:53:34.449600 containerd[1627]: time="2025-02-13T19:53:34.449562188Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:34.449758 containerd[1627]: time="2025-02-13T19:53:34.449730724Z" level=info msg="Ensure that sandbox a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee in task-service has been cleanup successfully" Feb 13 19:53:34.449795 kubelet[2837]: I0213 19:53:34.449619 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64" Feb 13 19:53:34.450046 containerd[1627]: time="2025-02-13T19:53:34.449968190Z" level=info msg="TearDown network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" successfully" Feb 13 19:53:34.450046 containerd[1627]: time="2025-02-13T19:53:34.449985583Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" returns successfully" Feb 13 19:53:34.451153 containerd[1627]: time="2025-02-13T19:53:34.451125694Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:34.451548 containerd[1627]: time="2025-02-13T19:53:34.451357420Z" level=info msg="Ensure that sandbox a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64 in task-service has been cleanup successfully" Feb 13 19:53:34.451599 containerd[1627]: time="2025-02-13T19:53:34.451553417Z" level=info msg="TearDown network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" successfully" Feb 13 19:53:34.451599 containerd[1627]: time="2025-02-13T19:53:34.451584467Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" returns successfully" Feb 13 19:53:34.451644 kubelet[2837]: E0213 19:53:34.451616 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:34.451898 containerd[1627]: time="2025-02-13T19:53:34.451845676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:1,}" Feb 13 19:53:34.451898 containerd[1627]: time="2025-02-13T19:53:34.451906231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:53:34.597637 containerd[1627]: time="2025-02-13T19:53:34.597497740Z" level=error msg="Failed to destroy network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.598867 containerd[1627]: time="2025-02-13T19:53:34.598824272Z" level=error msg="encountered an error cleaning up failed sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:53:34.599408 containerd[1627]: time="2025-02-13T19:53:34.599302941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.599601 kubelet[2837]: E0213 19:53:34.599543 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.599723 kubelet[2837]: E0213 19:53:34.599621 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:34.599723 kubelet[2837]: E0213 19:53:34.599654 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:34.599723 kubelet[2837]: E0213 19:53:34.599698 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" podUID="8d8f81e4-3b2c-4d15-9eda-1f02e5f43765" Feb 13 19:53:34.628115 containerd[1627]: time="2025-02-13T19:53:34.628007091Z" level=error msg="Failed to destroy network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.629417 containerd[1627]: time="2025-02-13T19:53:34.629369941Z" level=error msg="encountered an error cleaning up failed sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.629845 containerd[1627]: time="2025-02-13T19:53:34.629633286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.630331 kubelet[2837]: E0213 19:53:34.630277 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.630406 kubelet[2837]: E0213 19:53:34.630355 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:34.630406 kubelet[2837]: E0213 19:53:34.630378 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:34.630487 kubelet[2837]: E0213 19:53:34.630426 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" podUID="2ba376fe-d115-494d-afe1-f6a4f0570511" Feb 13 19:53:34.635266 containerd[1627]: time="2025-02-13T19:53:34.635118580Z" level=error msg="Failed to destroy network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.635988 containerd[1627]: time="2025-02-13T19:53:34.635921829Z" level=error msg="encountered an error cleaning up failed sandbox 
\"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.636171 containerd[1627]: time="2025-02-13T19:53:34.635945423Z" level=error msg="Failed to destroy network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.636646 containerd[1627]: time="2025-02-13T19:53:34.636592719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.636975 kubelet[2837]: E0213 19:53:34.636935 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.637067 kubelet[2837]: E0213 19:53:34.637004 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:34.637067 kubelet[2837]: E0213 19:53:34.637036 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:34.637138 kubelet[2837]: E0213 19:53:34.637103 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:34.637549 containerd[1627]: time="2025-02-13T19:53:34.637482630Z" level=error 
msg="encountered an error cleaning up failed sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.637697 containerd[1627]: time="2025-02-13T19:53:34.637671275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.638042 kubelet[2837]: E0213 19:53:34.637902 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.638042 kubelet[2837]: E0213 19:53:34.637936 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:34.638042 kubelet[2837]: E0213 19:53:34.637955 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:34.638219 kubelet[2837]: E0213 19:53:34.637994 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ppnzz" podUID="e1bbe453-c854-45bd-a73e-26eecfb4fd84" Feb 13 19:53:34.645302 containerd[1627]: time="2025-02-13T19:53:34.645236105Z" level=error msg="Failed to destroy network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.645743 
containerd[1627]: time="2025-02-13T19:53:34.645715686Z" level=error msg="encountered an error cleaning up failed sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.645805 containerd[1627]: time="2025-02-13T19:53:34.645777352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.646010 kubelet[2837]: E0213 19:53:34.645971 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.646080 kubelet[2837]: E0213 19:53:34.646036 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:34.646080 kubelet[2837]: E0213 19:53:34.646059 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:34.646155 kubelet[2837]: E0213 19:53:34.646113 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gzcgv" podUID="4aa420ca-11be-4692-8d69-b62bdc73431a" Feb 13 19:53:34.649038 containerd[1627]: time="2025-02-13T19:53:34.648967895Z" level=error msg="Failed to destroy network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 19:53:34.649396 containerd[1627]: time="2025-02-13T19:53:34.649355142Z" level=error msg="encountered an error cleaning up failed sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.649452 containerd[1627]: time="2025-02-13T19:53:34.649403192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.649586 kubelet[2837]: E0213 19:53:34.649537 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:34.649651 kubelet[2837]: E0213 19:53:34.649588 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:34.649651 kubelet[2837]: E0213 19:53:34.649622 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:34.649731 kubelet[2837]: E0213 19:53:34.649657 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" podUID="b6680963-b8a1-4333-9361-117009928f59" Feb 13 19:53:34.848856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64-shm.mount: Deactivated successfully. 
Feb 13 19:53:34.849087 systemd[1]: run-netns-cni\x2d28916650\x2d836a\x2d7bf1\x2d3db2\x2d47ca2b615c38.mount: Deactivated successfully. Feb 13 19:53:34.849291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7-shm.mount: Deactivated successfully. Feb 13 19:53:34.849470 systemd[1]: run-netns-cni\x2dc44c1052\x2d8ccd\x2d1d8e\x2d4ae2\x2db13fe06f3de7.mount: Deactivated successfully. Feb 13 19:53:34.849696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b-shm.mount: Deactivated successfully. Feb 13 19:53:34.849916 systemd[1]: run-netns-cni\x2d292777c7\x2d1ee6\x2d6260\x2dbff4\x2d97a87664bb13.mount: Deactivated successfully. Feb 13 19:53:34.850122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec-shm.mount: Deactivated successfully. Feb 13 19:53:35.453095 kubelet[2837]: I0213 19:53:35.452962 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c" Feb 13 19:53:35.453666 containerd[1627]: time="2025-02-13T19:53:35.453474630Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" Feb 13 19:53:35.453968 containerd[1627]: time="2025-02-13T19:53:35.453695706Z" level=info msg="Ensure that sandbox b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c in task-service has been cleanup successfully" Feb 13 19:53:35.454794 containerd[1627]: time="2025-02-13T19:53:35.454716653Z" level=info msg="TearDown network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" successfully" Feb 13 19:53:35.454794 containerd[1627]: time="2025-02-13T19:53:35.454743653Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" returns successfully" Feb 13 19:53:35.455471 containerd[1627]: time="2025-02-13T19:53:35.455068884Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:35.455471 containerd[1627]: time="2025-02-13T19:53:35.455169122Z" level=info msg="TearDown network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" successfully" Feb 13 19:53:35.455471 containerd[1627]: time="2025-02-13T19:53:35.455182789Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" returns successfully" Feb 13 19:53:35.455654 kubelet[2837]: I0213 19:53:35.455162 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27" Feb 13 19:53:35.456035 containerd[1627]: time="2025-02-13T19:53:35.455817080Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" Feb 13 19:53:35.456035 containerd[1627]: time="2025-02-13T19:53:35.455841556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:53:35.456035 containerd[1627]: time="2025-02-13T19:53:35.456007387Z" level=info msg="Ensure that sandbox eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27 in task-service has been cleanup successfully" Feb 13 19:53:35.456480 containerd[1627]: 
time="2025-02-13T19:53:35.456396929Z" level=info msg="TearDown network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" successfully" Feb 13 19:53:35.456480 containerd[1627]: time="2025-02-13T19:53:35.456415003Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" returns successfully" Feb 13 19:53:35.456457 systemd[1]: run-netns-cni\x2d70f858e1\x2d8e5d\x2ddb1d\x2d47f3\x2d6cf1bfa783ba.mount: Deactivated successfully. Feb 13 19:53:35.456970 containerd[1627]: time="2025-02-13T19:53:35.456666114Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:35.456970 containerd[1627]: time="2025-02-13T19:53:35.456742328Z" level=info msg="TearDown network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" successfully" Feb 13 19:53:35.456970 containerd[1627]: time="2025-02-13T19:53:35.456753138Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" returns successfully" Feb 13 19:53:35.457048 kubelet[2837]: E0213 19:53:35.457003 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:35.457600 containerd[1627]: time="2025-02-13T19:53:35.457280598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:2,}" Feb 13 19:53:35.457664 kubelet[2837]: I0213 19:53:35.457515 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749" Feb 13 19:53:35.458229 containerd[1627]: time="2025-02-13T19:53:35.457911453Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" Feb 13 19:53:35.458229 containerd[1627]: time="2025-02-13T19:53:35.458065122Z" level=info msg="Ensure that sandbox c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749 in task-service has been cleanup successfully" Feb 13 19:53:35.458442 containerd[1627]: time="2025-02-13T19:53:35.458420259Z" level=info msg="TearDown network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" successfully" Feb 13 19:53:35.458442 containerd[1627]: time="2025-02-13T19:53:35.458439235Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" returns successfully" Feb 13 19:53:35.458801 containerd[1627]: time="2025-02-13T19:53:35.458781407Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:35.458870 containerd[1627]: time="2025-02-13T19:53:35.458852942Z" level=info msg="TearDown network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" successfully" Feb 13 19:53:35.458870 containerd[1627]: time="2025-02-13T19:53:35.458866498Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" returns successfully" Feb 13 19:53:35.459260 containerd[1627]: time="2025-02-13T19:53:35.459222837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:2,}" Feb 13 19:53:35.459282 systemd[1]: 
run-netns-cni\x2d642e78fc\x2d1d7f\x2d0c11\x2d3c71\x2dafd27c320ea3.mount: Deactivated successfully. Feb 13 19:53:35.460133 kubelet[2837]: I0213 19:53:35.459783 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35" Feb 13 19:53:35.460296 containerd[1627]: time="2025-02-13T19:53:35.460266817Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:35.460507 containerd[1627]: time="2025-02-13T19:53:35.460405218Z" level=info msg="Ensure that sandbox aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35 in task-service has been cleanup successfully" Feb 13 19:53:35.460634 containerd[1627]: time="2025-02-13T19:53:35.460591046Z" level=info msg="TearDown network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" successfully" Feb 13 19:53:35.460634 containerd[1627]: time="2025-02-13T19:53:35.460609952Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" returns successfully" Feb 13 19:53:35.460815 containerd[1627]: time="2025-02-13T19:53:35.460796002Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:35.460884 containerd[1627]: time="2025-02-13T19:53:35.460867105Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 13 19:53:35.460884 containerd[1627]: time="2025-02-13T19:53:35.460879689Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:35.461144 containerd[1627]: time="2025-02-13T19:53:35.461118407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:2,}" Feb 13 19:53:35.461923 kubelet[2837]: I0213 19:53:35.461890 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6" Feb 13 19:53:35.462338 containerd[1627]: time="2025-02-13T19:53:35.462310346Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" Feb 13 19:53:35.462502 systemd[1]: run-netns-cni\x2d94fc633a\x2d4f0f\x2d1d11\x2dca36\x2d24286cacef77.mount: Deactivated successfully. 
Feb 13 19:53:35.462757 containerd[1627]: time="2025-02-13T19:53:35.462735434Z" level=info msg="Ensure that sandbox 4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6 in task-service has been cleanup successfully" Feb 13 19:53:35.463455 containerd[1627]: time="2025-02-13T19:53:35.463434697Z" level=info msg="TearDown network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" successfully" Feb 13 19:53:35.463455 containerd[1627]: time="2025-02-13T19:53:35.463452330Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" returns successfully" Feb 13 19:53:35.463637 kubelet[2837]: I0213 19:53:35.463606 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479" Feb 13 19:53:35.464228 containerd[1627]: time="2025-02-13T19:53:35.464089167Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:35.464228 containerd[1627]: time="2025-02-13T19:53:35.464095810Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" Feb 13 19:53:35.464228 containerd[1627]: time="2025-02-13T19:53:35.464175970Z" level=info msg="TearDown network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" successfully" Feb 13 19:53:35.464228 containerd[1627]: time="2025-02-13T19:53:35.464189014Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" returns successfully" Feb 13 19:53:35.464346 containerd[1627]: time="2025-02-13T19:53:35.464300203Z" level=info msg="Ensure that sandbox 0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479 in task-service has been cleanup successfully" Feb 13 19:53:35.464398 kubelet[2837]: E0213 19:53:35.464378 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:35.464477 containerd[1627]: time="2025-02-13T19:53:35.464462208Z" level=info msg="TearDown network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" successfully" Feb 13 19:53:35.464512 containerd[1627]: time="2025-02-13T19:53:35.464476434Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" returns successfully" Feb 13 19:53:35.464592 containerd[1627]: time="2025-02-13T19:53:35.464571332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:2,}" Feb 13 19:53:35.464948 containerd[1627]: time="2025-02-13T19:53:35.464926279Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:35.465026 containerd[1627]: time="2025-02-13T19:53:35.465000449Z" level=info msg="TearDown network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" successfully" Feb 13 19:53:35.465026 containerd[1627]: time="2025-02-13T19:53:35.465011610Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" returns successfully" Feb 13 19:53:35.465378 containerd[1627]: time="2025-02-13T19:53:35.465354544Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:53:35.465538 systemd[1]: run-netns-cni\x2db0e44b7e\x2d9461\x2d3978\x2d9f70\x2d55d96aaec7ff.mount: Deactivated successfully. Feb 13 19:53:35.465693 systemd[1]: run-netns-cni\x2de2f11d13\x2d2ee0\x2d1cd4\x2d40d6\x2dcccded14357f.mount: Deactivated successfully. Feb 13 19:53:35.846568 systemd[1]: run-netns-cni\x2d4b7832ef\x2d4060\x2ddbb3\x2ddd12\x2d90547021208b.mount: Deactivated successfully. Feb 13 19:53:36.470072 containerd[1627]: time="2025-02-13T19:53:36.469895561Z" level=error msg="Failed to destroy network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.471467 containerd[1627]: time="2025-02-13T19:53:36.471320678Z" level=error msg="encountered an error cleaning up failed sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.471467 containerd[1627]: time="2025-02-13T19:53:36.471384237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.471628 kubelet[2837]: E0213 19:53:36.471569 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.472200 kubelet[2837]: E0213 19:53:36.471631 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:36.472200 kubelet[2837]: E0213 19:53:36.471658 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:36.472200 kubelet[2837]: E0213 19:53:36.471703 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ppnzz" podUID="e1bbe453-c854-45bd-a73e-26eecfb4fd84" Feb 13 19:53:36.474035 containerd[1627]: time="2025-02-13T19:53:36.473995782Z" level=error msg="Failed to destroy network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.477990 containerd[1627]: time="2025-02-13T19:53:36.476954298Z" level=error msg="Failed to destroy network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.477990 containerd[1627]: time="2025-02-13T19:53:36.477337728Z" level=error msg="encountered an error cleaning up failed sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.477990 containerd[1627]: time="2025-02-13T19:53:36.477607054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.478221 kubelet[2837]: E0213 19:53:36.478113 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.478280 kubelet[2837]: E0213 19:53:36.478243 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:36.478318 kubelet[2837]: E0213 19:53:36.478291 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:36.478737 kubelet[2837]: E0213 19:53:36.478667 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" podUID="8d8f81e4-3b2c-4d15-9eda-1f02e5f43765" Feb 13 19:53:36.479265 containerd[1627]: time="2025-02-13T19:53:36.479233398Z" level=error msg="encountered an error cleaning up failed sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.479324 containerd[1627]: time="2025-02-13T19:53:36.479286789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.479423 kubelet[2837]: E0213 19:53:36.479396 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.479472 kubelet[2837]: E0213 19:53:36.479432 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:36.479472 kubelet[2837]: E0213 19:53:36.479450 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:36.479563 kubelet[2837]: E0213 19:53:36.479484 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" podUID="b6680963-b8a1-4333-9361-117009928f59" Feb 13 19:53:36.492165 containerd[1627]: time="2025-02-13T19:53:36.492125159Z" level=error msg="Failed to destroy network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.492397 containerd[1627]: time="2025-02-13T19:53:36.492347517Z" level=error msg="Failed to destroy network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.492782 containerd[1627]: time="2025-02-13T19:53:36.492750885Z" level=error msg="encountered an error cleaning up failed sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.492842 containerd[1627]: time="2025-02-13T19:53:36.492805237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.492987 containerd[1627]: time="2025-02-13T19:53:36.492960198Z" level=error msg="encountered an error cleaning up failed sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.493110 containerd[1627]: time="2025-02-13T19:53:36.493085864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.493381 kubelet[2837]: E0213 19:53:36.493323 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.493381 kubelet[2837]: E0213 19:53:36.493352 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.493569 kubelet[2837]: E0213 19:53:36.493400 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:36.493569 kubelet[2837]: E0213 19:53:36.493415 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:36.493569 kubelet[2837]: E0213 19:53:36.493426 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:36.493569 kubelet[2837]: E0213 19:53:36.493441 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:36.493673 kubelet[2837]: E0213 19:53:36.493483 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" podUID="2ba376fe-d115-494d-afe1-f6a4f0570511" Feb 13 19:53:36.493673 kubelet[2837]: E0213 19:53:36.493490 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gzcgv" podUID="4aa420ca-11be-4692-8d69-b62bdc73431a" Feb 13 19:53:36.540289 containerd[1627]: time="2025-02-13T19:53:36.512767437Z" level=error msg="Failed to destroy network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.540759 containerd[1627]: time="2025-02-13T19:53:36.540708422Z" level=error msg="encountered an error cleaning up failed sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.540941 containerd[1627]: time="2025-02-13T19:53:36.540777141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.541026 kubelet[2837]: E0213 19:53:36.540979 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:36.541122 kubelet[2837]: E0213 19:53:36.541044 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:36.541122 kubelet[2837]: E0213 19:53:36.541068 2837 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:36.541254 kubelet[2837]: E0213 19:53:36.541115 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:36.848854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826-shm.mount: Deactivated successfully. Feb 13 19:53:36.849643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba-shm.mount: Deactivated successfully. Feb 13 19:53:36.849883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae-shm.mount: Deactivated successfully. Feb 13 19:53:36.850082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43-shm.mount: Deactivated successfully. Feb 13 19:53:36.850302 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763-shm.mount: Deactivated successfully. 
Feb 13 19:53:36.906993 kubelet[2837]: I0213 19:53:36.905902 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:53:36.906993 kubelet[2837]: E0213 19:53:36.906677 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:37.474468 kubelet[2837]: I0213 19:53:37.474058 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c" Feb 13 19:53:37.474978 containerd[1627]: time="2025-02-13T19:53:37.474608893Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" Feb 13 19:53:37.474978 containerd[1627]: time="2025-02-13T19:53:37.474812886Z" level=info msg="Ensure that sandbox c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c in task-service has been cleanup successfully" Feb 13 19:53:37.476247 containerd[1627]: time="2025-02-13T19:53:37.476142663Z" level=info msg="TearDown network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" successfully" Feb 13 19:53:37.476247 containerd[1627]: time="2025-02-13T19:53:37.476168352Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" returns successfully" Feb 13 19:53:37.476680 containerd[1627]: time="2025-02-13T19:53:37.476577209Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:37.476680 containerd[1627]: time="2025-02-13T19:53:37.476666597Z" level=info msg="TearDown network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" successfully" Feb 13 19:53:37.476680 containerd[1627]: time="2025-02-13T19:53:37.476675574Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" returns successfully" Feb 13 19:53:37.477620 containerd[1627]: time="2025-02-13T19:53:37.477589791Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:37.477681 containerd[1627]: time="2025-02-13T19:53:37.477660744Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 13 19:53:37.477681 containerd[1627]: time="2025-02-13T19:53:37.477678858Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:37.478079 systemd[1]: run-netns-cni\x2d968d40ad\x2d026a\x2d875c\x2d80ff\x2dc3eee3cb88ad.mount: Deactivated successfully. 
Feb 13 19:53:37.478405 containerd[1627]: time="2025-02-13T19:53:37.478378572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:3,}" Feb 13 19:53:37.478472 kubelet[2837]: I0213 19:53:37.478457 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826" Feb 13 19:53:37.479147 containerd[1627]: time="2025-02-13T19:53:37.479115716Z" level=info msg="StopPodSandbox for \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\"" Feb 13 19:53:37.479536 containerd[1627]: time="2025-02-13T19:53:37.479511740Z" level=info msg="Ensure that sandbox a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826 in task-service has been cleanup successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.479879872Z" level=info msg="TearDown network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.479895772Z" level=info msg="StopPodSandbox for \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" returns successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.480036235Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.480102148Z" level=info msg="TearDown network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.480111165Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" returns successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.480359482Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.480431487Z" level=info msg="TearDown network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.480440634Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" returns successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.481064105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:3,}" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.481134287Z" level=info msg="StopPodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\"" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.481284870Z" level=info msg="Ensure that sandbox 16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43 in task-service has been cleanup successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.481424843Z" level=info msg="TearDown network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.481434671Z" level=info msg="StopPodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" returns successfully" Feb 13 19:53:37.482218 
containerd[1627]: time="2025-02-13T19:53:37.481949698Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.482013929Z" level=info msg="TearDown network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" successfully" Feb 13 19:53:37.482218 containerd[1627]: time="2025-02-13T19:53:37.482022124Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" returns successfully" Feb 13 19:53:37.482658 kubelet[2837]: E0213 19:53:37.480571 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:37.482658 kubelet[2837]: I0213 19:53:37.480680 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43" Feb 13 19:53:37.482694 systemd[1]: run-netns-cni\x2d8b36cf54\x2d1667\x2da460\x2d9cfa\x2dbedd67a13468.mount: Deactivated successfully. Feb 13 19:53:37.482793 containerd[1627]: time="2025-02-13T19:53:37.482768285Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:37.482865 containerd[1627]: time="2025-02-13T19:53:37.482844098Z" level=info msg="TearDown network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" successfully" Feb 13 19:53:37.482865 containerd[1627]: time="2025-02-13T19:53:37.482858976Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" returns successfully" Feb 13 19:53:37.483789 containerd[1627]: time="2025-02-13T19:53:37.483761030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:53:37.484288 kubelet[2837]: I0213 19:53:37.484266 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763" Feb 13 19:53:37.484636 containerd[1627]: time="2025-02-13T19:53:37.484609393Z" level=info msg="StopPodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\"" Feb 13 19:53:37.484767 containerd[1627]: time="2025-02-13T19:53:37.484744947Z" level=info msg="Ensure that sandbox 8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763 in task-service has been cleanup successfully" Feb 13 19:53:37.485180 containerd[1627]: time="2025-02-13T19:53:37.485157362Z" level=info msg="TearDown network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" successfully" Feb 13 19:53:37.485234 containerd[1627]: time="2025-02-13T19:53:37.485195734Z" level=info msg="StopPodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" returns successfully" Feb 13 19:53:37.485628 containerd[1627]: time="2025-02-13T19:53:37.485596657Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" Feb 13 19:53:37.486252 containerd[1627]: time="2025-02-13T19:53:37.485671707Z" level=info msg="TearDown network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" successfully" Feb 13 19:53:37.486252 containerd[1627]: time="2025-02-13T19:53:37.485685163Z" level=info msg="StopPodSandbox for 
\"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" returns successfully" Feb 13 19:53:37.486423 systemd[1]: run-netns-cni\x2d6c93d032\x2db3bc\x2d7921\x2d9fc5\x2da4a48bc93587.mount: Deactivated successfully. Feb 13 19:53:37.487312 containerd[1627]: time="2025-02-13T19:53:37.487286660Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:37.487374 containerd[1627]: time="2025-02-13T19:53:37.487356992Z" level=info msg="TearDown network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" successfully" Feb 13 19:53:37.487374 containerd[1627]: time="2025-02-13T19:53:37.487369686Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" returns successfully" Feb 13 19:53:37.488530 containerd[1627]: time="2025-02-13T19:53:37.488406934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:53:37.490048 systemd[1]: run-netns-cni\x2d1cbf4a2a\x2de3f1\x2d2a04\x2d8314\x2df7e4c9e256e7.mount: Deactivated successfully. Feb 13 19:53:37.779261 kubelet[2837]: I0213 19:53:37.779125 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae" Feb 13 19:53:37.781616 containerd[1627]: time="2025-02-13T19:53:37.781575814Z" level=info msg="StopPodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\"" Feb 13 19:53:37.781926 containerd[1627]: time="2025-02-13T19:53:37.781762866Z" level=info msg="Ensure that sandbox db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae in task-service has been cleanup successfully" Feb 13 19:53:37.782080 containerd[1627]: time="2025-02-13T19:53:37.782049744Z" level=info msg="TearDown network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" successfully" Feb 13 19:53:37.782080 containerd[1627]: time="2025-02-13T19:53:37.782077065Z" level=info msg="StopPodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" returns successfully" Feb 13 19:53:37.782736 containerd[1627]: time="2025-02-13T19:53:37.782716325Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" Feb 13 19:53:37.782805 containerd[1627]: time="2025-02-13T19:53:37.782791366Z" level=info msg="TearDown network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" successfully" Feb 13 19:53:37.782805 containerd[1627]: time="2025-02-13T19:53:37.782803700Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" returns successfully" Feb 13 19:53:37.782955 kubelet[2837]: I0213 19:53:37.782941 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba" Feb 13 19:53:37.782998 containerd[1627]: time="2025-02-13T19:53:37.782982045Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:37.783334 containerd[1627]: time="2025-02-13T19:53:37.783077854Z" level=info msg="TearDown network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" successfully" Feb 13 19:53:37.783334 containerd[1627]: time="2025-02-13T19:53:37.783094355Z" level=info 
msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" returns successfully" Feb 13 19:53:37.783399 kubelet[2837]: E0213 19:53:37.783251 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:37.783549 containerd[1627]: time="2025-02-13T19:53:37.783530234Z" level=info msg="StopPodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\"" Feb 13 19:53:37.783638 kubelet[2837]: E0213 19:53:37.783583 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:37.783694 containerd[1627]: time="2025-02-13T19:53:37.783592591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:3,}" Feb 13 19:53:37.783694 containerd[1627]: time="2025-02-13T19:53:37.783664206Z" level=info msg="Ensure that sandbox 03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba in task-service has been cleanup successfully" Feb 13 19:53:37.783974 containerd[1627]: time="2025-02-13T19:53:37.783869992Z" level=info msg="TearDown network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" successfully" Feb 13 19:53:37.783974 containerd[1627]: time="2025-02-13T19:53:37.783883067Z" level=info msg="StopPodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" returns successfully" Feb 13 19:53:37.784112 containerd[1627]: time="2025-02-13T19:53:37.784090165Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" Feb 13 19:53:37.784224 containerd[1627]: time="2025-02-13T19:53:37.784176998Z" level=info msg="TearDown network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" successfully" Feb 13 19:53:37.784224 containerd[1627]: time="2025-02-13T19:53:37.784190795Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" returns successfully" Feb 13 19:53:37.784411 containerd[1627]: time="2025-02-13T19:53:37.784380701Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:37.784478 containerd[1627]: time="2025-02-13T19:53:37.784446785Z" level=info msg="TearDown network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" successfully" Feb 13 19:53:37.784478 containerd[1627]: time="2025-02-13T19:53:37.784454850Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" returns successfully" Feb 13 19:53:37.784747 containerd[1627]: time="2025-02-13T19:53:37.784724336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:3,}" Feb 13 19:53:37.846587 systemd[1]: run-netns-cni\x2d81e44271\x2d46d4\x2d115e\x2d9c88\x2d5bf4960f8003.mount: Deactivated successfully. Feb 13 19:53:37.846769 systemd[1]: run-netns-cni\x2da7ea4b2d\x2d3178\x2d6840\x2da83e\x2dfb7a55955ac8.mount: Deactivated successfully. Feb 13 19:53:38.748629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879546006.mount: Deactivated successfully. 
Feb 13 19:53:38.780183 containerd[1627]: time="2025-02-13T19:53:38.780131813Z" level=error msg="Failed to destroy network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.780716 containerd[1627]: time="2025-02-13T19:53:38.780565327Z" level=error msg="encountered an error cleaning up failed sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.780716 containerd[1627]: time="2025-02-13T19:53:38.780613377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.780785 kubelet[2837]: E0213 19:53:38.780748 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.781076 kubelet[2837]: E0213 19:53:38.780794 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:38.781076 kubelet[2837]: E0213 19:53:38.780815 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:38.781076 kubelet[2837]: E0213 19:53:38.780852 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:38.786731 kubelet[2837]: I0213 19:53:38.786702 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e" Feb 13 19:53:38.787419 containerd[1627]: time="2025-02-13T19:53:38.787177321Z" level=info msg="StopPodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\"" Feb 13 19:53:38.788127 containerd[1627]: time="2025-02-13T19:53:38.788058456Z" level=info msg="Ensure that sandbox e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e in task-service has been cleanup successfully" Feb 13 19:53:38.788683 containerd[1627]: time="2025-02-13T19:53:38.788653132Z" level=info msg="TearDown network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" successfully" Feb 13 19:53:38.788729 containerd[1627]: time="2025-02-13T19:53:38.788680064Z" level=info msg="StopPodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" returns successfully" Feb 13 19:53:38.789393 containerd[1627]: time="2025-02-13T19:53:38.788937156Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" Feb 13 19:53:38.789393 containerd[1627]: time="2025-02-13T19:53:38.789129989Z" level=info msg="TearDown network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" successfully" Feb 13 19:53:38.789393 containerd[1627]: time="2025-02-13T19:53:38.789141620Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" returns successfully" Feb 13 19:53:38.789393 containerd[1627]: time="2025-02-13T19:53:38.789377994Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:38.789516 containerd[1627]: time="2025-02-13T19:53:38.789449689Z" level=info msg="TearDown network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" successfully" Feb 13 19:53:38.789516 containerd[1627]: time="2025-02-13T19:53:38.789469105Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" returns successfully" Feb 13 19:53:38.789877 containerd[1627]: time="2025-02-13T19:53:38.789761223Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:38.789988 containerd[1627]: time="2025-02-13T19:53:38.789962021Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 13 19:53:38.790033 containerd[1627]: time="2025-02-13T19:53:38.789986266Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:38.790590 containerd[1627]: time="2025-02-13T19:53:38.790525608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:4,}" Feb 13 19:53:38.811235 containerd[1627]: time="2025-02-13T19:53:38.809707747Z" level=error msg="Failed to destroy network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
19:53:38.812118 containerd[1627]: time="2025-02-13T19:53:38.810565928Z" level=error msg="encountered an error cleaning up failed sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.812573 containerd[1627]: time="2025-02-13T19:53:38.812544764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.812863 kubelet[2837]: E0213 19:53:38.812820 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.812915 kubelet[2837]: E0213 19:53:38.812883 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:38.812915 kubelet[2837]: E0213 19:53:38.812904 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gzcgv" Feb 13 19:53:38.812966 kubelet[2837]: E0213 19:53:38.812942 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gzcgv_kube-system(4aa420ca-11be-4692-8d69-b62bdc73431a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gzcgv" podUID="4aa420ca-11be-4692-8d69-b62bdc73431a" Feb 13 19:53:38.821113 containerd[1627]: time="2025-02-13T19:53:38.821062387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:38.823778 containerd[1627]: time="2025-02-13T19:53:38.823727311Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:53:38.825475 containerd[1627]: time="2025-02-13T19:53:38.825426822Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:38.830877 containerd[1627]: time="2025-02-13T19:53:38.830848352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:38.831736 containerd[1627]: time="2025-02-13T19:53:38.831453959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.393556596s" Feb 13 19:53:38.832006 containerd[1627]: time="2025-02-13T19:53:38.831990627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:53:38.855356 containerd[1627]: time="2025-02-13T19:53:38.855309543Z" level=info msg="CreateContainer within sandbox \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:53:38.858866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db-shm.mount: Deactivated successfully. Feb 13 19:53:38.859093 systemd[1]: run-netns-cni\x2d13ac56f4\x2df28c\x2d24a4\x2dd7cd\x2d6d3d2dbcd44a.mount: Deactivated successfully. Feb 13 19:53:38.859290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e-shm.mount: Deactivated successfully. Feb 13 19:53:38.895561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830650206.mount: Deactivated successfully. 
Feb 13 19:53:38.900276 containerd[1627]: time="2025-02-13T19:53:38.900225530Z" level=error msg="Failed to destroy network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.903220 containerd[1627]: time="2025-02-13T19:53:38.902421804Z" level=error msg="encountered an error cleaning up failed sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.903220 containerd[1627]: time="2025-02-13T19:53:38.902519928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.903220 containerd[1627]: time="2025-02-13T19:53:38.902427555Z" level=info msg="CreateContainer within sandbox \"2f79cd1a2ada36e904313587e57c7e924968180db8305aec3cab24eb43d6a86b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fdced01bbd60a009278316f2c95c9de60ff8441b108da1b5ce905731772479e7\"" Feb 13 19:53:38.904034 kubelet[2837]: E0213 19:53:38.903888 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.904034 kubelet[2837]: E0213 19:53:38.903932 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:38.904034 kubelet[2837]: E0213 19:53:38.903952 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" Feb 13 19:53:38.904140 kubelet[2837]: E0213 19:53:38.903981 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-b95766759-6rgg4_calico-apiserver(8d8f81e4-3b2c-4d15-9eda-1f02e5f43765)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" podUID="8d8f81e4-3b2c-4d15-9eda-1f02e5f43765" Feb 13 19:53:38.904067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917-shm.mount: Deactivated successfully. Feb 13 19:53:38.905432 containerd[1627]: time="2025-02-13T19:53:38.905269361Z" level=info msg="StartContainer for \"fdced01bbd60a009278316f2c95c9de60ff8441b108da1b5ce905731772479e7\"" Feb 13 19:53:38.910623 containerd[1627]: time="2025-02-13T19:53:38.910552341Z" level=error msg="Failed to destroy network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.911937 containerd[1627]: time="2025-02-13T19:53:38.911904801Z" level=error msg="encountered an error cleaning up failed sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.911986 containerd[1627]: time="2025-02-13T19:53:38.911964212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.914269 kubelet[2837]: E0213 19:53:38.914229 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.914324 kubelet[2837]: E0213 19:53:38.914293 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:38.914324 kubelet[2837]: E0213 19:53:38.914314 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ppnzz" Feb 13 19:53:38.916498 kubelet[2837]: E0213 19:53:38.914420 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ppnzz_kube-system(e1bbe453-c854-45bd-a73e-26eecfb4fd84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ppnzz" podUID="e1bbe453-c854-45bd-a73e-26eecfb4fd84" Feb 13 19:53:38.915677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6-shm.mount: Deactivated successfully. Feb 13 19:53:38.917087 containerd[1627]: time="2025-02-13T19:53:38.917044190Z" level=error msg="Failed to destroy network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.917664 containerd[1627]: time="2025-02-13T19:53:38.917641993Z" level=error msg="encountered an error cleaning up failed sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.917791 containerd[1627]: time="2025-02-13T19:53:38.917772478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.918244 kubelet[2837]: E0213 19:53:38.918071 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.918401 kubelet[2837]: E0213 19:53:38.918384 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:38.918477 kubelet[2837]: E0213 
19:53:38.918463 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" Feb 13 19:53:38.918577 kubelet[2837]: E0213 19:53:38.918553 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b95766759-24q6w_calico-apiserver(b6680963-b8a1-4333-9361-117009928f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" podUID="b6680963-b8a1-4333-9361-117009928f59" Feb 13 19:53:38.938742 containerd[1627]: time="2025-02-13T19:53:38.938691036Z" level=error msg="Failed to destroy network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.939190 containerd[1627]: time="2025-02-13T19:53:38.939159486Z" level=error msg="encountered an error cleaning up failed sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.940345 containerd[1627]: time="2025-02-13T19:53:38.939252951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.940418 kubelet[2837]: E0213 19:53:38.939500 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.940418 kubelet[2837]: E0213 19:53:38.939559 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:38.940418 kubelet[2837]: E0213 19:53:38.939583 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" Feb 13 19:53:38.940514 kubelet[2837]: E0213 19:53:38.939627 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ddcf7667d-wjflz_calico-system(2ba376fe-d115-494d-afe1-f6a4f0570511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" podUID="2ba376fe-d115-494d-afe1-f6a4f0570511" Feb 13 19:53:38.946005 containerd[1627]: time="2025-02-13T19:53:38.945958512Z" level=error msg="Failed to destroy network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.946333 containerd[1627]: time="2025-02-13T19:53:38.946309612Z" level=error msg="encountered an error cleaning up failed sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.946383 containerd[1627]: time="2025-02-13T19:53:38.946364344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.946619 kubelet[2837]: E0213 19:53:38.946579 2837 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:53:38.946670 kubelet[2837]: E0213 19:53:38.946642 2837 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:38.946698 kubelet[2837]: E0213 19:53:38.946666 2837 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jzmps" Feb 13 19:53:38.946771 kubelet[2837]: E0213 19:53:38.946735 2837 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jzmps_calico-system(b48fef47-3cfd-4e49-87b0-9de0481fb342)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jzmps" podUID="b48fef47-3cfd-4e49-87b0-9de0481fb342" Feb 13 19:53:39.099035 containerd[1627]: time="2025-02-13T19:53:39.098905681Z" level=info msg="StartContainer for \"fdced01bbd60a009278316f2c95c9de60ff8441b108da1b5ce905731772479e7\" returns successfully" Feb 13 19:53:39.102057 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:53:39.102257 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:53:39.227579 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:55854.service - OpenSSH per-connection server daemon (10.0.0.1:55854). Feb 13 19:53:39.276848 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 55854 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:39.278917 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:39.285428 systemd-logind[1584]: New session 10 of user core. Feb 13 19:53:39.291072 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:53:39.423093 sshd[4599]: Connection closed by 10.0.0.1 port 55854 Feb 13 19:53:39.423371 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:39.427905 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:55854.service: Deactivated successfully. Feb 13 19:53:39.430389 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:53:39.430438 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:53:39.431741 systemd-logind[1584]: Removed session 10. 
Feb 13 19:53:39.793176 kubelet[2837]: I0213 19:53:39.793140 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad" Feb 13 19:53:39.794754 kubelet[2837]: I0213 19:53:39.794723 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51" Feb 13 19:53:39.795190 containerd[1627]: time="2025-02-13T19:53:39.795159057Z" level=info msg="StopPodSandbox for \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\"" Feb 13 19:53:39.795805 containerd[1627]: time="2025-02-13T19:53:39.795653656Z" level=info msg="Ensure that sandbox 5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51 in task-service has been cleanup successfully" Feb 13 19:53:39.795909 containerd[1627]: time="2025-02-13T19:53:39.795882896Z" level=info msg="TearDown network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\" successfully" Feb 13 19:53:39.795909 containerd[1627]: time="2025-02-13T19:53:39.795898746Z" level=info msg="StopPodSandbox for \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\" returns successfully" Feb 13 19:53:39.796511 containerd[1627]: time="2025-02-13T19:53:39.796399396Z" level=info msg="StopPodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\"" Feb 13 19:53:39.796511 containerd[1627]: time="2025-02-13T19:53:39.796483013Z" level=info msg="TearDown network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" successfully" Feb 13 19:53:39.796511 containerd[1627]: time="2025-02-13T19:53:39.796493283Z" level=info msg="StopPodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" returns successfully" Feb 13 19:53:39.796862 containerd[1627]: time="2025-02-13T19:53:39.796793926Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" Feb 13 19:53:39.796976 containerd[1627]: time="2025-02-13T19:53:39.796950671Z" level=info msg="TearDown network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" successfully" Feb 13 19:53:39.796976 containerd[1627]: time="2025-02-13T19:53:39.796969947Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" returns successfully" Feb 13 19:53:39.797365 containerd[1627]: time="2025-02-13T19:53:39.797340874Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:39.797475 containerd[1627]: time="2025-02-13T19:53:39.797416305Z" level=info msg="TearDown network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" successfully" Feb 13 19:53:39.797524 containerd[1627]: time="2025-02-13T19:53:39.797473262Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" returns successfully" Feb 13 19:53:39.798160 kubelet[2837]: E0213 19:53:39.798120 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.798961 containerd[1627]: time="2025-02-13T19:53:39.798890734Z" level=info msg="StopPodSandbox for \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\"" Feb 13 19:53:39.799427 containerd[1627]: time="2025-02-13T19:53:39.799109244Z" level=info 
msg="Ensure that sandbox 62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad in task-service has been cleanup successfully" Feb 13 19:53:39.799427 containerd[1627]: time="2025-02-13T19:53:39.799322394Z" level=info msg="TearDown network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\" successfully" Feb 13 19:53:39.799427 containerd[1627]: time="2025-02-13T19:53:39.799339627Z" level=info msg="StopPodSandbox for \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\" returns successfully" Feb 13 19:53:39.800103 containerd[1627]: time="2025-02-13T19:53:39.799478687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:4,}" Feb 13 19:53:39.801057 containerd[1627]: time="2025-02-13T19:53:39.800994113Z" level=info msg="StopPodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\"" Feb 13 19:53:39.801114 containerd[1627]: time="2025-02-13T19:53:39.801092116Z" level=info msg="TearDown network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" successfully" Feb 13 19:53:39.801114 containerd[1627]: time="2025-02-13T19:53:39.801110501Z" level=info msg="StopPodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" returns successfully" Feb 13 19:53:39.801639 containerd[1627]: time="2025-02-13T19:53:39.801585624Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" Feb 13 19:53:39.801868 containerd[1627]: time="2025-02-13T19:53:39.801830813Z" level=info msg="TearDown network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" successfully" Feb 13 19:53:39.801994 containerd[1627]: time="2025-02-13T19:53:39.801845160Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" returns successfully" Feb 13 19:53:39.803357 containerd[1627]: time="2025-02-13T19:53:39.803075029Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:39.803357 containerd[1627]: time="2025-02-13T19:53:39.803176420Z" level=info msg="TearDown network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" successfully" Feb 13 19:53:39.803357 containerd[1627]: time="2025-02-13T19:53:39.803190767Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" returns successfully" Feb 13 19:53:39.804130 kubelet[2837]: I0213 19:53:39.804095 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69" Feb 13 19:53:39.804429 containerd[1627]: time="2025-02-13T19:53:39.804223886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:53:39.804619 containerd[1627]: time="2025-02-13T19:53:39.804602247Z" level=info msg="StopPodSandbox for \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\"" Feb 13 19:53:39.805082 containerd[1627]: time="2025-02-13T19:53:39.805066468Z" level=info msg="Ensure that sandbox 88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69 in task-service has been cleanup successfully" Feb 13 19:53:39.805701 containerd[1627]: time="2025-02-13T19:53:39.805526502Z" 
level=info msg="TearDown network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\" successfully" Feb 13 19:53:39.805701 containerd[1627]: time="2025-02-13T19:53:39.805543324Z" level=info msg="StopPodSandbox for \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\" returns successfully" Feb 13 19:53:39.807232 containerd[1627]: time="2025-02-13T19:53:39.807161983Z" level=info msg="StopPodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\"" Feb 13 19:53:39.807897 containerd[1627]: time="2025-02-13T19:53:39.807299070Z" level=info msg="TearDown network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" successfully" Feb 13 19:53:39.807897 containerd[1627]: time="2025-02-13T19:53:39.807494868Z" level=info msg="StopPodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" returns successfully" Feb 13 19:53:39.808989 containerd[1627]: time="2025-02-13T19:53:39.808850383Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" Feb 13 19:53:39.809139 containerd[1627]: time="2025-02-13T19:53:39.809109228Z" level=info msg="TearDown network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" successfully" Feb 13 19:53:39.809139 containerd[1627]: time="2025-02-13T19:53:39.809125620Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" returns successfully" Feb 13 19:53:39.810034 containerd[1627]: time="2025-02-13T19:53:39.809886177Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:39.810140 kubelet[2837]: I0213 19:53:39.810098 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917" Feb 13 19:53:39.810664 containerd[1627]: time="2025-02-13T19:53:39.809980535Z" level=info msg="TearDown network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" successfully" Feb 13 19:53:39.810664 containerd[1627]: time="2025-02-13T19:53:39.810639652Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" returns successfully" Feb 13 19:53:39.811196 containerd[1627]: time="2025-02-13T19:53:39.810648659Z" level=info msg="StopPodSandbox for \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\"" Feb 13 19:53:39.811196 containerd[1627]: time="2025-02-13T19:53:39.810919668Z" level=info msg="Ensure that sandbox e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917 in task-service has been cleanup successfully" Feb 13 19:53:39.812125 containerd[1627]: time="2025-02-13T19:53:39.812102739Z" level=info msg="TearDown network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\" successfully" Feb 13 19:53:39.812125 containerd[1627]: time="2025-02-13T19:53:39.812123478Z" level=info msg="StopPodSandbox for \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\" returns successfully" Feb 13 19:53:39.813128 containerd[1627]: time="2025-02-13T19:53:39.812954198Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:39.813358 containerd[1627]: time="2025-02-13T19:53:39.813337357Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 
13 19:53:39.813465 containerd[1627]: time="2025-02-13T19:53:39.813442324Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:39.813524 containerd[1627]: time="2025-02-13T19:53:39.812971570Z" level=info msg="StopPodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\"" Feb 13 19:53:39.813598 containerd[1627]: time="2025-02-13T19:53:39.813579943Z" level=info msg="TearDown network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" successfully" Feb 13 19:53:39.813626 containerd[1627]: time="2025-02-13T19:53:39.813597947Z" level=info msg="StopPodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" returns successfully" Feb 13 19:53:39.814279 containerd[1627]: time="2025-02-13T19:53:39.813913239Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" Feb 13 19:53:39.814279 containerd[1627]: time="2025-02-13T19:53:39.813995684Z" level=info msg="TearDown network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" successfully" Feb 13 19:53:39.814279 containerd[1627]: time="2025-02-13T19:53:39.814005893Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" returns successfully" Feb 13 19:53:39.814279 containerd[1627]: time="2025-02-13T19:53:39.814142218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:5,}" Feb 13 19:53:39.814972 containerd[1627]: time="2025-02-13T19:53:39.814953682Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:39.815057 containerd[1627]: time="2025-02-13T19:53:39.815043019Z" level=info msg="TearDown network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" successfully" Feb 13 19:53:39.815057 containerd[1627]: time="2025-02-13T19:53:39.815055102Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" returns successfully" Feb 13 19:53:39.815539 containerd[1627]: time="2025-02-13T19:53:39.815498565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:53:39.815839 kubelet[2837]: I0213 19:53:39.815815 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6" Feb 13 19:53:39.816297 containerd[1627]: time="2025-02-13T19:53:39.816269473Z" level=info msg="StopPodSandbox for \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\"" Feb 13 19:53:39.816474 containerd[1627]: time="2025-02-13T19:53:39.816455241Z" level=info msg="Ensure that sandbox 7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6 in task-service has been cleanup successfully" Feb 13 19:53:39.816632 containerd[1627]: time="2025-02-13T19:53:39.816614540Z" level=info msg="TearDown network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\" successfully" Feb 13 19:53:39.816632 containerd[1627]: time="2025-02-13T19:53:39.816630109Z" level=info msg="StopPodSandbox for \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\" returns successfully" Feb 13 19:53:39.817898 
containerd[1627]: time="2025-02-13T19:53:39.817743580Z" level=info msg="StopPodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\"" Feb 13 19:53:39.817898 containerd[1627]: time="2025-02-13T19:53:39.817833188Z" level=info msg="TearDown network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" successfully" Feb 13 19:53:39.817898 containerd[1627]: time="2025-02-13T19:53:39.817843948Z" level=info msg="StopPodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" returns successfully" Feb 13 19:53:39.818586 containerd[1627]: time="2025-02-13T19:53:39.818512413Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" Feb 13 19:53:39.819086 kubelet[2837]: I0213 19:53:39.819059 2837 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db" Feb 13 19:53:39.819605 containerd[1627]: time="2025-02-13T19:53:39.819532389Z" level=info msg="TearDown network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" successfully" Feb 13 19:53:39.819605 containerd[1627]: time="2025-02-13T19:53:39.819555663Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" returns successfully" Feb 13 19:53:39.820247 containerd[1627]: time="2025-02-13T19:53:39.819856477Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:39.820247 containerd[1627]: time="2025-02-13T19:53:39.819936137Z" level=info msg="TearDown network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" successfully" Feb 13 19:53:39.820247 containerd[1627]: time="2025-02-13T19:53:39.819945244Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" returns successfully" Feb 13 19:53:39.820247 containerd[1627]: time="2025-02-13T19:53:39.819990168Z" level=info msg="StopPodSandbox for \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\"" Feb 13 19:53:39.820247 containerd[1627]: time="2025-02-13T19:53:39.820142524Z" level=info msg="Ensure that sandbox 0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db in task-service has been cleanup successfully" Feb 13 19:53:39.820466 containerd[1627]: time="2025-02-13T19:53:39.820341588Z" level=info msg="TearDown network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\" successfully" Feb 13 19:53:39.820466 containerd[1627]: time="2025-02-13T19:53:39.820353050Z" level=info msg="StopPodSandbox for \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\" returns successfully" Feb 13 19:53:39.820634 kubelet[2837]: E0213 19:53:39.820604 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.820846 containerd[1627]: time="2025-02-13T19:53:39.820820267Z" level=info msg="StopPodSandbox for \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\"" Feb 13 19:53:39.820940 containerd[1627]: time="2025-02-13T19:53:39.820905286Z" level=info msg="TearDown network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" successfully" Feb 13 19:53:39.821034 containerd[1627]: time="2025-02-13T19:53:39.821012898Z" level=info msg="StopPodSandbox for 
\"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" returns successfully" Feb 13 19:53:39.821084 containerd[1627]: time="2025-02-13T19:53:39.820823573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:4,}" Feb 13 19:53:39.821347 containerd[1627]: time="2025-02-13T19:53:39.821322920Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" Feb 13 19:53:39.821810 containerd[1627]: time="2025-02-13T19:53:39.821483703Z" level=info msg="TearDown network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" successfully" Feb 13 19:53:39.821810 containerd[1627]: time="2025-02-13T19:53:39.821495705Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" returns successfully" Feb 13 19:53:39.822372 containerd[1627]: time="2025-02-13T19:53:39.822276090Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:39.822452 containerd[1627]: time="2025-02-13T19:53:39.822425240Z" level=info msg="TearDown network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" successfully" Feb 13 19:53:39.822528 containerd[1627]: time="2025-02-13T19:53:39.822506803Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" returns successfully" Feb 13 19:53:39.822675 kubelet[2837]: E0213 19:53:39.822645 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:39.822884 containerd[1627]: time="2025-02-13T19:53:39.822823618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:4,}" Feb 13 19:53:39.849976 systemd[1]: run-netns-cni\x2db0ace7be\x2dfe80\x2ddb7d\x2d0b5c\x2d941d351813e5.mount: Deactivated successfully. Feb 13 19:53:39.850445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69-shm.mount: Deactivated successfully. Feb 13 19:53:39.850602 systemd[1]: run-netns-cni\x2d8eb97660\x2dc030\x2d9b82\x2d48a7\x2d6ed81d5db36f.mount: Deactivated successfully. Feb 13 19:53:39.850735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51-shm.mount: Deactivated successfully. Feb 13 19:53:39.850866 systemd[1]: run-netns-cni\x2d8b764f9a\x2d6268\x2df016\x2d7c95\x2da40dae82f6a2.mount: Deactivated successfully. Feb 13 19:53:39.850999 systemd[1]: run-netns-cni\x2da1149dbc\x2dea36\x2d36c1\x2d12df\x2ddff730bb83e7.mount: Deactivated successfully. Feb 13 19:53:39.851125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad-shm.mount: Deactivated successfully. Feb 13 19:53:39.851720 systemd[1]: run-netns-cni\x2d2e23879b\x2deb8b\x2d0271\x2d7a01\x2d31d15986490c.mount: Deactivated successfully. Feb 13 19:53:39.851856 systemd[1]: run-netns-cni\x2dceb74f35\x2df27e\x2d1344\x2de5c0\x2d3059c8dfd1cd.mount: Deactivated successfully. 
Feb 13 19:53:40.100729 systemd-networkd[1245]: calie2b3f3c84db: Link UP Feb 13 19:53:40.102732 systemd-networkd[1245]: calie2b3f3c84db: Gained carrier Feb 13 19:53:40.186245 kubelet[2837]: I0213 19:53:40.186157 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jdzdm" podStartSLOduration=2.363908249 podStartE2EDuration="19.186135372s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:22.010613289 +0000 UTC m=+23.391306888" lastFinishedPulling="2025-02-13 19:53:38.832840422 +0000 UTC m=+40.213534011" observedRunningTime="2025-02-13 19:53:39.812848439 +0000 UTC m=+41.193542039" watchObservedRunningTime="2025-02-13 19:53:40.186135372 +0000 UTC m=+41.566828971" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:39.865 [INFO][4624] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:39.887 [INFO][4624] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0 calico-kube-controllers-ddcf7667d- calico-system 2ba376fe-d115-494d-afe1-f6a4f0570511 775 0 2025-02-13 19:53:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ddcf7667d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-ddcf7667d-wjflz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie2b3f3c84db [] []}} ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:39.887 [INFO][4624] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:39.957 [INFO][4663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" HandleID="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Workload="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.002 [INFO][4663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" HandleID="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Workload="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-ddcf7667d-wjflz", "timestamp":"2025-02-13 19:53:39.957236741 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 
19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.002 [INFO][4663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.002 [INFO][4663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.002 [INFO][4663] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.004 [INFO][4663] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.009 [INFO][4663] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.012 [INFO][4663] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.014 [INFO][4663] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.015 [INFO][4663] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.015 [INFO][4663] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.016 [INFO][4663] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.031 [INFO][4663] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.089 [INFO][4663] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.089 [INFO][4663] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" host="localhost" Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.089 [INFO][4663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
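[Editor's note] The pod_startup_latency_tracker entry above decomposes exactly: podStartE2EDuration (19.186135372s) is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window, lastFinishedPulling - firstStartedPulling. A quick check of the arithmetic, with the monotonic offsets (m=+...) hardcoded from the entry:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+...) copied from the kubelet entry for calico-node-jdzdm.
        firstStartedPulling := 23.391306888 // seconds since kubelet start
        lastFinishedPulling := 40.213534011
        e2e := 19.186135372 // podStartE2EDuration in seconds

        pull := lastFinishedPulling - firstStartedPulling // ≈ 16.822227123s spent pulling images
        fmt.Printf("SLO duration: %.9fs\n", e2e-pull)     // ≈ 2.363908249s, matching the log
    }
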
Feb 13 19:53:40.189903 containerd[1627]: 2025-02-13 19:53:40.089 [INFO][4663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" HandleID="k8s-pod-network.a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Workload="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.190570 containerd[1627]: 2025-02-13 19:53:40.093 [INFO][4624] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0", GenerateName:"calico-kube-controllers-ddcf7667d-", Namespace:"calico-system", SelfLink:"", UID:"2ba376fe-d115-494d-afe1-f6a4f0570511", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddcf7667d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-ddcf7667d-wjflz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2b3f3c84db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:40.190570 containerd[1627]: 2025-02-13 19:53:40.093 [INFO][4624] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.190570 containerd[1627]: 2025-02-13 19:53:40.093 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2b3f3c84db ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.190570 containerd[1627]: 2025-02-13 19:53:40.130 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.190570 containerd[1627]: 2025-02-13 19:53:40.130 [INFO][4624] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0", GenerateName:"calico-kube-controllers-ddcf7667d-", Namespace:"calico-system", SelfLink:"", UID:"2ba376fe-d115-494d-afe1-f6a4f0570511", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddcf7667d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd", Pod:"calico-kube-controllers-ddcf7667d-wjflz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2b3f3c84db", MAC:"26:7c:5a:71:a0:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:40.190570 containerd[1627]: 2025-02-13 19:53:40.186 [INFO][4624] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd" Namespace="calico-system" Pod="calico-kube-controllers-ddcf7667d-wjflz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ddcf7667d--wjflz-eth0" Feb 13 19:53:40.451039 systemd-networkd[1245]: calida89fc3475a: Link UP Feb 13 19:53:40.452304 systemd-networkd[1245]: calida89fc3475a: Gained carrier Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.306 [INFO][4684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.325 [INFO][4684] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b95766759--24q6w-eth0 calico-apiserver-b95766759- calico-apiserver b6680963-b8a1-4333-9361-117009928f59 784 0 2025-02-13 19:53:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b95766759 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b95766759-24q6w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida89fc3475a [] []}} ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.325 [INFO][4684] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.383 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" HandleID="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Workload="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.397 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" HandleID="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Workload="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b95766759-24q6w", "timestamp":"2025-02-13 19:53:40.383117848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.398 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.398 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.398 [INFO][4699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.401 [INFO][4699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.408 [INFO][4699] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.414 [INFO][4699] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.418 [INFO][4699] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.420 [INFO][4699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.420 [INFO][4699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.422 [INFO][4699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.426 [INFO][4699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" host="localhost" Feb 
13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.435 [INFO][4699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.435 [INFO][4699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" host="localhost" Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.435 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:53:40.472552 containerd[1627]: 2025-02-13 19:53:40.435 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" HandleID="k8s-pod-network.23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Workload="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.473557 containerd[1627]: 2025-02-13 19:53:40.439 [INFO][4684] cni-plugin/k8s.go 386: Populated endpoint ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b95766759--24q6w-eth0", GenerateName:"calico-apiserver-b95766759-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6680963-b8a1-4333-9361-117009928f59", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b95766759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b95766759-24q6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida89fc3475a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:40.473557 containerd[1627]: 2025-02-13 19:53:40.439 [INFO][4684] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.473557 containerd[1627]: 2025-02-13 19:53:40.439 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida89fc3475a ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.473557 containerd[1627]: 2025-02-13 19:53:40.451 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.473557 containerd[1627]: 2025-02-13 19:53:40.452 [INFO][4684] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b95766759--24q6w-eth0", GenerateName:"calico-apiserver-b95766759-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6680963-b8a1-4333-9361-117009928f59", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b95766759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d", Pod:"calico-apiserver-b95766759-24q6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida89fc3475a", MAC:"de:e1:ee:3b:dc:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:40.473557 containerd[1627]: 2025-02-13 19:53:40.465 [INFO][4684] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-24q6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--24q6w-eth0" Feb 13 19:53:40.550390 containerd[1627]: time="2025-02-13T19:53:40.550181305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:40.550390 containerd[1627]: time="2025-02-13T19:53:40.550305419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:40.550390 containerd[1627]: time="2025-02-13T19:53:40.550326789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:40.553371 containerd[1627]: time="2025-02-13T19:53:40.553030314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:40.553371 containerd[1627]: time="2025-02-13T19:53:40.553103792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:40.553371 containerd[1627]: time="2025-02-13T19:53:40.553292617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:40.553747 containerd[1627]: time="2025-02-13T19:53:40.553051293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:40.555432 containerd[1627]: time="2025-02-13T19:53:40.555337365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:40.674754 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:53:40.699507 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:53:40.745241 kernel: bpftool[5010]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:53:40.781250 containerd[1627]: time="2025-02-13T19:53:40.779257122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-24q6w,Uid:b6680963-b8a1-4333-9361-117009928f59,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d\"" Feb 13 19:53:40.788259 containerd[1627]: time="2025-02-13T19:53:40.788231550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:53:40.796371 systemd-networkd[1245]: cali0fdbb48b136: Link UP Feb 13 19:53:40.797699 systemd-networkd[1245]: cali0fdbb48b136: Gained carrier Feb 13 19:53:40.805547 containerd[1627]: time="2025-02-13T19:53:40.805400433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddcf7667d-wjflz,Uid:2ba376fe-d115-494d-afe1-f6a4f0570511,Namespace:calico-system,Attempt:4,} returns sandbox id \"a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd\"" Feb 13 19:53:40.824834 kubelet[2837]: E0213 19:53:40.824809 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.491 [INFO][4783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.511 [INFO][4783] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jzmps-eth0 csi-node-driver- calico-system b48fef47-3cfd-4e49-87b0-9de0481fb342 658 0 2025-02-13 19:53:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jzmps eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0fdbb48b136 [] []}} ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.511 [INFO][4783] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.597 [INFO][4891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" HandleID="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Workload="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.627 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" HandleID="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Workload="localhost-k8s-csi--node--driver--jzmps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jzmps", "timestamp":"2025-02-13 19:53:40.595898903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.627 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.627 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.627 [INFO][4891] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.631 [INFO][4891] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.646 [INFO][4891] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.659 [INFO][4891] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.663 [INFO][4891] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.669 [INFO][4891] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.669 [INFO][4891] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.681 [INFO][4891] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752 Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.707 [INFO][4891] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.719 [INFO][4891] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.719 [INFO][4891] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" host="localhost" Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.721 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
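[Editor's note] Each of these [INFO][...] ipam/... runs is the same five-step Calico IPAM flow: take the host-wide lock, look up the host's block affinities, confirm the 192.168.88.128/26 block, claim one address by writing the block back, release the lock. The trigger is an AutoAssign call whose arguments are dumped verbatim in the "Auto assigning IP" lines. A sketch of issuing that call through libcalico-go's client, as of recent Calico releases; the handle string and datastore configuration are assumptions, while the field names are copied from the AutoAssignArgs dump above:

    package main

    import (
        "context"
        "log"

        api "github.com/projectcalico/api/pkg/apis/projectcalico/v3"
        "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        c, err := clientv3.NewFromEnv() // assumes DATASTORE_TYPE etc. are set in the environment
        if err != nil {
            log.Fatal(err)
        }

        handle := "k8s-pod-network.<container-id>" // handle format seen in the log; ID elided
        v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
            Num4:     1, // the log requests exactly one IPv4 address and no IPv6
            Num6:     0,
            HandleID: &handle,
            Hostname: "localhost",
            Attrs: map[string]string{ // mirrored from the Attrs map in the dump
                "namespace": "calico-system",
                "node":      "localhost",
                "pod":       "csi-node-driver-jzmps",
            },
            IntendedUse: api.IPPoolAllowedUseWorkload, // "Workload" in the dump
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("assigned: %v", v4.IPs) // e.g. [192.168.88.131/26] per the log
    }
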
Feb 13 19:53:40.931095 containerd[1627]: 2025-02-13 19:53:40.721 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" HandleID="k8s-pod-network.5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Workload="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:40.931985 containerd[1627]: 2025-02-13 19:53:40.779 [INFO][4783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jzmps-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b48fef47-3cfd-4e49-87b0-9de0481fb342", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jzmps", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fdbb48b136", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:40.931985 containerd[1627]: 2025-02-13 19:53:40.782 [INFO][4783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:40.931985 containerd[1627]: 2025-02-13 19:53:40.782 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fdbb48b136 ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:40.931985 containerd[1627]: 2025-02-13 19:53:40.798 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:40.931985 containerd[1627]: 2025-02-13 19:53:40.800 [INFO][4783] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jzmps-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b48fef47-3cfd-4e49-87b0-9de0481fb342", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752", Pod:"csi-node-driver-jzmps", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0fdbb48b136", MAC:"36:64:c0:9a:83:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:40.931985 containerd[1627]: 2025-02-13 19:53:40.927 [INFO][4783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752" Namespace="calico-system" Pod="csi-node-driver-jzmps" WorkloadEndpoint="localhost-k8s-csi--node--driver--jzmps-eth0" Feb 13 19:53:41.026431 containerd[1627]: time="2025-02-13T19:53:41.024967535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:41.026431 containerd[1627]: time="2025-02-13T19:53:41.025035593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:41.026431 containerd[1627]: time="2025-02-13T19:53:41.025046403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.026431 containerd[1627]: time="2025-02-13T19:53:41.025143515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.026709 systemd-networkd[1245]: cali2f41e108ad5: Link UP Feb 13 19:53:41.026976 systemd-networkd[1245]: cali2f41e108ad5: Gained carrier Feb 13 19:53:41.057881 systemd-networkd[1245]: vxlan.calico: Link UP Feb 13 19:53:41.058073 systemd-networkd[1245]: vxlan.calico: Gained carrier Feb 13 19:53:41.071941 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:53:41.104232 containerd[1627]: time="2025-02-13T19:53:41.104153012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jzmps,Uid:b48fef47-3cfd-4e49-87b0-9de0481fb342,Namespace:calico-system,Attempt:5,} returns sandbox id \"5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752\"" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.508 [INFO][4833] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.537 [INFO][4833] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0 coredns-7db6d8ff4d- kube-system e1bbe453-c854-45bd-a73e-26eecfb4fd84 779 0 2025-02-13 19:53:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-ppnzz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f41e108ad5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.537 [INFO][4833] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.652 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" HandleID="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Workload="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.666 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" HandleID="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Workload="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047d2c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-ppnzz", "timestamp":"2025-02-13 19:53:40.652659243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.666 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.719 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.719 [INFO][4919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.721 [INFO][4919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.726 [INFO][4919] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.733 [INFO][4919] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.751 [INFO][4919] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.775 [INFO][4919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.775 [INFO][4919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.793 [INFO][4919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376 Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:40.925 [INFO][4919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:41.015 [INFO][4919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:41.015 [INFO][4919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" host="localhost" Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:41.016 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
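[Editor's note] All five pods in this section (.129 through .133) come out of the single affinity block 192.168.88.128/26 that this host claimed once and then reuses, so no further block-affinity writes are needed until its 64 addresses run out. The enumeration itself is plain CIDR arithmetic, standard library only:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Affinity block from the IPAM lines above.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // Walk the block in the order the claims appear in the log:
        // the first five assignments were .129, .130, .131, .132 and .133.
        for addr, n := block.Addr(), 0; block.Contains(addr) && n < 6; addr, n = addr.Next(), n+1 {
            fmt.Println(addr) // 192.168.88.128 ... 192.168.88.133
        }
        fmt.Println("block size:", 1<<(32-block.Bits())) // 64 addresses per /26
    }
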
Feb 13 19:53:41.109190 containerd[1627]: 2025-02-13 19:53:41.016 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" HandleID="k8s-pod-network.c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Workload="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.110043 containerd[1627]: 2025-02-13 19:53:41.022 [INFO][4833] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e1bbe453-c854-45bd-a73e-26eecfb4fd84", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-ppnzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f41e108ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:41.110043 containerd[1627]: 2025-02-13 19:53:41.022 [INFO][4833] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.110043 containerd[1627]: 2025-02-13 19:53:41.022 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f41e108ad5 ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.110043 containerd[1627]: 2025-02-13 19:53:41.025 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.110043 containerd[1627]: 2025-02-13 19:53:41.025 
[INFO][4833] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e1bbe453-c854-45bd-a73e-26eecfb4fd84", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376", Pod:"coredns-7db6d8ff4d-ppnzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f41e108ad5", MAC:"fa:2e:78:67:af:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:41.110043 containerd[1627]: 2025-02-13 19:53:41.105 [INFO][4833] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ppnzz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--ppnzz-eth0" Feb 13 19:53:41.153817 containerd[1627]: time="2025-02-13T19:53:41.153688291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:41.153974 containerd[1627]: time="2025-02-13T19:53:41.153844724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:41.153974 containerd[1627]: time="2025-02-13T19:53:41.153878618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.154268 containerd[1627]: time="2025-02-13T19:53:41.154221542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.184486 systemd-networkd[1245]: calid7e11e3a7da: Link UP Feb 13 19:53:41.186575 systemd-networkd[1245]: calid7e11e3a7da: Gained carrier Feb 13 19:53:41.190949 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:40.560 [INFO][4834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:40.578 [INFO][4834] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0 coredns-7db6d8ff4d- kube-system 4aa420ca-11be-4692-8d69-b62bdc73431a 783 0 2025-02-13 19:53:13 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-gzcgv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7e11e3a7da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:40.578 [INFO][4834] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:40.701 [INFO][4967] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" HandleID="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Workload="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:40.724 [INFO][4967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" HandleID="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Workload="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00016b060), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-gzcgv", "timestamp":"2025-02-13 19:53:40.701286454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:40.725 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.016 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.016 [INFO][4967] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.105 [INFO][4967] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.152 [INFO][4967] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.157 [INFO][4967] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.159 [INFO][4967] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.161 [INFO][4967] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.161 [INFO][4967] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.163 [INFO][4967] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.166 [INFO][4967] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.175 [INFO][4967] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.175 [INFO][4967] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" host="localhost" Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.175 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:53:41.202932 containerd[1627]: 2025-02-13 19:53:41.175 [INFO][4967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" HandleID="k8s-pod-network.b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Workload="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.203602 containerd[1627]: 2025-02-13 19:53:41.180 [INFO][4834] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4aa420ca-11be-4692-8d69-b62bdc73431a", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-gzcgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7e11e3a7da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:41.203602 containerd[1627]: 2025-02-13 19:53:41.180 [INFO][4834] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.203602 containerd[1627]: 2025-02-13 19:53:41.181 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7e11e3a7da ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.203602 containerd[1627]: 2025-02-13 19:53:41.187 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.203602 containerd[1627]: 2025-02-13 19:53:41.187
[INFO][4834] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4aa420ca-11be-4692-8d69-b62bdc73431a", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a", Pod:"coredns-7db6d8ff4d-gzcgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7e11e3a7da", MAC:"92:1b:e6:74:5d:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:41.203602 containerd[1627]: 2025-02-13 19:53:41.199 [INFO][4834] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gzcgv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gzcgv-eth0" Feb 13 19:53:41.230095 containerd[1627]: time="2025-02-13T19:53:41.230046551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ppnzz,Uid:e1bbe453-c854-45bd-a73e-26eecfb4fd84,Namespace:kube-system,Attempt:4,} returns sandbox id \"c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376\"" Feb 13 19:53:41.232727 kubelet[2837]: E0213 19:53:41.232692 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:41.240177 systemd-networkd[1245]: cali3fee4102085: Link UP Feb 13 19:53:41.243035 containerd[1627]: time="2025-02-13T19:53:41.242972906Z" level=info msg="CreateContainer within sandbox \"c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:53:41.244762 containerd[1627]: time="2025-02-13T19:53:41.244467272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:41.244762 containerd[1627]: time="2025-02-13T19:53:41.244560006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:41.244762 containerd[1627]: time="2025-02-13T19:53:41.244580645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.244762 containerd[1627]: time="2025-02-13T19:53:41.244715147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.244901 systemd-networkd[1245]: cali3fee4102085: Gained carrier Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:40.532 [INFO][4812] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:40.558 [INFO][4812] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0 calico-apiserver-b95766759- calico-apiserver 8d8f81e4-3b2c-4d15-9eda-1f02e5f43765 780 0 2025-02-13 19:53:21 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b95766759 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b95766759-6rgg4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fee4102085 [] []}} ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:40.559 [INFO][4812] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:40.681 [INFO][4934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" HandleID="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Workload="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:40.725 [INFO][4934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" HandleID="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Workload="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002938f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b95766759-6rgg4", "timestamp":"2025-02-13 19:53:40.681727324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:40.727 [INFO][4934]
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.175 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.175 [INFO][4934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.179 [INFO][4934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.189 [INFO][4934] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.201 [INFO][4934] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.204 [INFO][4934] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.207 [INFO][4934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.207 [INFO][4934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.209 [INFO][4934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.216 [INFO][4934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.225 [INFO][4934] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.227 [INFO][4934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" host="localhost" Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.227 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:53:41.268555 containerd[1627]: 2025-02-13 19:53:41.227 [INFO][4934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" HandleID="k8s-pod-network.462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Workload="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.269555 containerd[1627]: 2025-02-13 19:53:41.233 [INFO][4812] cni-plugin/k8s.go 386: Populated endpoint ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0", GenerateName:"calico-apiserver-b95766759-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d8f81e4-3b2c-4d15-9eda-1f02e5f43765", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b95766759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b95766759-6rgg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fee4102085", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:41.269555 containerd[1627]: 2025-02-13 19:53:41.233 [INFO][4812] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.269555 containerd[1627]: 2025-02-13 19:53:41.233 [INFO][4812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fee4102085 ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.269555 containerd[1627]: 2025-02-13 19:53:41.246 [INFO][4812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.269555 containerd[1627]: 2025-02-13 19:53:41.250 [INFO][4812] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c"
Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0", GenerateName:"calico-apiserver-b95766759-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d8f81e4-3b2c-4d15-9eda-1f02e5f43765", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 53, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b95766759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c", Pod:"calico-apiserver-b95766759-6rgg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fee4102085", MAC:"0e:6d:53:10:6d:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:53:41.269555 containerd[1627]: 2025-02-13 19:53:41.262 [INFO][4812] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c" Namespace="calico-apiserver" Pod="calico-apiserver-b95766759-6rgg4" WorkloadEndpoint="localhost-k8s-calico--apiserver--b95766759--6rgg4-eth0" Feb 13 19:53:41.288666 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:53:41.290032 containerd[1627]: time="2025-02-13T19:53:41.289965852Z" level=info msg="CreateContainer within sandbox \"c3ae1723ea09b98b14cb2736e8177ecb152a1b4d23064fb8e657348f0f8bc376\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1505bc063cbba5461102b8966bf0fe9c3e7de70085dd53e2a699774427a12f87\"" Feb 13 19:53:41.291073 containerd[1627]: time="2025-02-13T19:53:41.290872283Z" level=info msg="StartContainer for \"1505bc063cbba5461102b8966bf0fe9c3e7de70085dd53e2a699774427a12f87\"" Feb 13 19:53:41.331385 containerd[1627]: time="2025-02-13T19:53:41.330764009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:41.331385 containerd[1627]: time="2025-02-13T19:53:41.330822018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:41.331385 containerd[1627]: time="2025-02-13T19:53:41.330834551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.331385 containerd[1627]: time="2025-02-13T19:53:41.330917357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:41.339465 containerd[1627]: time="2025-02-13T19:53:41.339323697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzcgv,Uid:4aa420ca-11be-4692-8d69-b62bdc73431a,Namespace:kube-system,Attempt:4,} returns sandbox id \"b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a\"" Feb 13 19:53:41.341956 kubelet[2837]: E0213 19:53:41.341334 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:41.346794 containerd[1627]: time="2025-02-13T19:53:41.346737053Z" level=info msg="CreateContainer within sandbox \"b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:53:41.366594 containerd[1627]: time="2025-02-13T19:53:41.366547412Z" level=info msg="CreateContainer within sandbox \"b65dc942074919ee13d180435f8021dc0d47528688d082981cbb63a3fe5a793a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca8275562e996f2eab54366894f7a3e7cfca0bed9a99cafacde8a04e17a1d749\"" Feb 13 19:53:41.368096 containerd[1627]: time="2025-02-13T19:53:41.367787720Z" level=info msg="StartContainer for \"ca8275562e996f2eab54366894f7a3e7cfca0bed9a99cafacde8a04e17a1d749\"" Feb 13 19:53:41.373934 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:53:41.397903 containerd[1627]: time="2025-02-13T19:53:41.397154688Z" level=info msg="StartContainer for \"1505bc063cbba5461102b8966bf0fe9c3e7de70085dd53e2a699774427a12f87\" returns successfully" Feb 13 19:53:41.428627 containerd[1627]: time="2025-02-13T19:53:41.428223701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b95766759-6rgg4,Uid:8d8f81e4-3b2c-4d15-9eda-1f02e5f43765,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c\"" Feb 13 19:53:41.512096 containerd[1627]: time="2025-02-13T19:53:41.511521018Z" level=info msg="StartContainer for \"ca8275562e996f2eab54366894f7a3e7cfca0bed9a99cafacde8a04e17a1d749\" returns successfully" Feb 13 19:53:41.753439 systemd-networkd[1245]: calida89fc3475a: Gained IPv6LL Feb 13 19:53:41.829780 kubelet[2837]: E0213 19:53:41.829752 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:41.832590 kubelet[2837]: E0213 19:53:41.832561 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:41.854136 kubelet[2837]: I0213 19:53:41.853949 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ppnzz" podStartSLOduration=28.853929921 podStartE2EDuration="28.853929921s" podCreationTimestamp="2025-02-13 19:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:41.840514477 +0000 UTC m=+43.221208076" watchObservedRunningTime="2025-02-13 19:53:41.853929921 +0000 UTC m=+43.234623520" Feb 13 19:53:41.865720 kubelet[2837]: I0213 19:53:41.865403 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-gzcgv" podStartSLOduration=28.86536678 podStartE2EDuration="28.86536678s" podCreationTimestamp="2025-02-13 19:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:41.862505439 +0000 UTC m=+43.243199038" watchObservedRunningTime="2025-02-13 19:53:41.86536678 +0000 UTC m=+43.246060379" Feb 13 19:53:41.945512 systemd-networkd[1245]: calie2b3f3c84db: Gained IPv6LL Feb 13 19:53:42.073496 systemd-networkd[1245]: cali0fdbb48b136: Gained IPv6LL Feb 13 19:53:42.522500 systemd-networkd[1245]: cali3fee4102085: Gained IPv6LL Feb 13 19:53:42.587048 systemd-networkd[1245]: cali2f41e108ad5: Gained IPv6LL Feb 13 19:53:42.651065 systemd-networkd[1245]: vxlan.calico: Gained IPv6LL Feb 13 19:53:42.839879 kubelet[2837]: E0213 19:53:42.839750 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:42.839879 kubelet[2837]: E0213 19:53:42.839793 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:43.097456 systemd-networkd[1245]: calid7e11e3a7da: Gained IPv6LL Feb 13 19:53:43.428811 containerd[1627]: time="2025-02-13T19:53:43.428654705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:43.429558 containerd[1627]: time="2025-02-13T19:53:43.429510322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:53:43.430715 containerd[1627]: time="2025-02-13T19:53:43.430687190Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:43.432877 containerd[1627]: time="2025-02-13T19:53:43.432843799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:43.433574 containerd[1627]: time="2025-02-13T19:53:43.433535517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.645174363s" Feb 13 19:53:43.433682 containerd[1627]: time="2025-02-13T19:53:43.433576273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:53:43.434386 containerd[1627]: time="2025-02-13T19:53:43.434365154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:53:43.435414 containerd[1627]: time="2025-02-13T19:53:43.435375530Z" level=info msg="CreateContainer within sandbox \"23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:53:43.448992 containerd[1627]: time="2025-02-13T19:53:43.448944509Z" level=info 
msg="CreateContainer within sandbox \"23eaa284b8b93465b2b00a2b2ba9b10573d9084731db7f5c68e4d97cfd7ee02d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4921cada7c7508cacb0b41a3b67ead380d76f7c1ecead56f165d2ff626c79f76\"" Feb 13 19:53:43.450704 containerd[1627]: time="2025-02-13T19:53:43.449770228Z" level=info msg="StartContainer for \"4921cada7c7508cacb0b41a3b67ead380d76f7c1ecead56f165d2ff626c79f76\"" Feb 13 19:53:43.524613 containerd[1627]: time="2025-02-13T19:53:43.524560004Z" level=info msg="StartContainer for \"4921cada7c7508cacb0b41a3b67ead380d76f7c1ecead56f165d2ff626c79f76\" returns successfully" Feb 13 19:53:43.844665 kubelet[2837]: E0213 19:53:43.844427 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:43.844665 kubelet[2837]: E0213 19:53:43.844549 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:43.971963 kubelet[2837]: I0213 19:53:43.971511 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b95766759-24q6w" podStartSLOduration=20.321744591 podStartE2EDuration="22.971490536s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:40.784473435 +0000 UTC m=+42.165167034" lastFinishedPulling="2025-02-13 19:53:43.43421938 +0000 UTC m=+44.814912979" observedRunningTime="2025-02-13 19:53:43.970840046 +0000 UTC m=+45.351533645" watchObservedRunningTime="2025-02-13 19:53:43.971490536 +0000 UTC m=+45.352184296" Feb 13 19:53:44.430554 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:45490.service - OpenSSH per-connection server daemon (10.0.0.1:45490). Feb 13 19:53:44.489607 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 45490 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:44.491685 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:44.496929 systemd-logind[1584]: New session 11 of user core. Feb 13 19:53:44.502729 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:53:44.636920 sshd[5471]: Connection closed by 10.0.0.1 port 45490 Feb 13 19:53:44.637388 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:44.647600 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:45494.service - OpenSSH per-connection server daemon (10.0.0.1:45494). Feb 13 19:53:44.648557 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:45490.service: Deactivated successfully. Feb 13 19:53:44.651325 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:53:44.653844 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:53:44.655680 systemd-logind[1584]: Removed session 11. Feb 13 19:53:44.687715 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 45494 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:44.689237 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:44.693569 systemd-logind[1584]: New session 12 of user core. Feb 13 19:53:44.701466 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 19:53:44.848170 kubelet[2837]: I0213 19:53:44.847476 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:53:44.858271 sshd[5488]: Connection closed by 10.0.0.1 port 45494 Feb 13 19:53:44.858371 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:44.868501 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:45510.service - OpenSSH per-connection server daemon (10.0.0.1:45510). Feb 13 19:53:44.869941 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:45494.service: Deactivated successfully. Feb 13 19:53:44.877560 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:53:44.884742 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:53:44.886554 systemd-logind[1584]: Removed session 12. Feb 13 19:53:44.918367 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 45510 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:44.920154 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:44.924404 systemd-logind[1584]: New session 13 of user core. Feb 13 19:53:44.932501 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:53:45.056674 sshd[5502]: Connection closed by 10.0.0.1 port 45510 Feb 13 19:53:45.057067 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:45.061351 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:45510.service: Deactivated successfully. Feb 13 19:53:45.063966 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:53:45.064729 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:53:45.065765 systemd-logind[1584]: Removed session 13. Feb 13 19:53:46.111843 containerd[1627]: time="2025-02-13T19:53:46.111781645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:46.112858 containerd[1627]: time="2025-02-13T19:53:46.112780670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:53:46.113939 containerd[1627]: time="2025-02-13T19:53:46.113902165Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:46.116392 containerd[1627]: time="2025-02-13T19:53:46.116346041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:46.116949 containerd[1627]: time="2025-02-13T19:53:46.116905610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.682512343s" Feb 13 19:53:46.116984 containerd[1627]: time="2025-02-13T19:53:46.116947248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:53:46.118576 containerd[1627]: time="2025-02-13T19:53:46.118548824Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:53:46.147249 containerd[1627]: time="2025-02-13T19:53:46.147125772Z" level=info msg="CreateContainer within sandbox \"a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:53:46.162800 containerd[1627]: time="2025-02-13T19:53:46.162712692Z" level=info msg="CreateContainer within sandbox \"a227dc0e318c9da4ee1a63d97719392c71e3365b057c41212685da02b66175dd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"32e991700963ddcca385074f66bebd3cc765e1e75884cb78b487d21255e33ee7\"" Feb 13 19:53:46.163511 containerd[1627]: time="2025-02-13T19:53:46.163482837Z" level=info msg="StartContainer for \"32e991700963ddcca385074f66bebd3cc765e1e75884cb78b487d21255e33ee7\"" Feb 13 19:53:46.251015 containerd[1627]: time="2025-02-13T19:53:46.250955819Z" level=info msg="StartContainer for \"32e991700963ddcca385074f66bebd3cc765e1e75884cb78b487d21255e33ee7\" returns successfully" Feb 13 19:53:46.973420 kubelet[2837]: I0213 19:53:46.973281 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ddcf7667d-wjflz" podStartSLOduration=20.66186262 podStartE2EDuration="25.973262479s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:40.806674525 +0000 UTC m=+42.187368124" lastFinishedPulling="2025-02-13 19:53:46.118074383 +0000 UTC m=+47.498767983" observedRunningTime="2025-02-13 19:53:46.890105207 +0000 UTC m=+48.270798816" watchObservedRunningTime="2025-02-13 19:53:46.973262479 +0000 UTC m=+48.353956078" Feb 13 19:53:48.506296 containerd[1627]: time="2025-02-13T19:53:48.506232291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:48.507272 containerd[1627]: time="2025-02-13T19:53:48.507228581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:53:48.508527 containerd[1627]: time="2025-02-13T19:53:48.508491421Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:48.510999 containerd[1627]: time="2025-02-13T19:53:48.510961956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:48.511614 containerd[1627]: time="2025-02-13T19:53:48.511581639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.392872084s" Feb 13 19:53:48.511665 containerd[1627]: time="2025-02-13T19:53:48.511612216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:53:48.512726 containerd[1627]: time="2025-02-13T19:53:48.512685711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:53:48.514061 containerd[1627]: 
time="2025-02-13T19:53:48.514031716Z" level=info msg="CreateContainer within sandbox \"5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:53:48.539306 containerd[1627]: time="2025-02-13T19:53:48.539255717Z" level=info msg="CreateContainer within sandbox \"5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cf2ec0aa6d786521c491b8f99f83cb8eb4303cb8ddd29628343270dc57ab7936\"" Feb 13 19:53:48.539974 containerd[1627]: time="2025-02-13T19:53:48.539834954Z" level=info msg="StartContainer for \"cf2ec0aa6d786521c491b8f99f83cb8eb4303cb8ddd29628343270dc57ab7936\"" Feb 13 19:53:48.618921 containerd[1627]: time="2025-02-13T19:53:48.618855057Z" level=info msg="StartContainer for \"cf2ec0aa6d786521c491b8f99f83cb8eb4303cb8ddd29628343270dc57ab7936\" returns successfully" Feb 13 19:53:49.137441 kubelet[2837]: I0213 19:53:49.137382 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:53:49.578989 containerd[1627]: time="2025-02-13T19:53:49.578932692Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:49.614231 containerd[1627]: time="2025-02-13T19:53:49.611455245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:53:49.614404 containerd[1627]: time="2025-02-13T19:53:49.614228088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.101446577s" Feb 13 19:53:49.614404 containerd[1627]: time="2025-02-13T19:53:49.614275407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:53:49.616081 containerd[1627]: time="2025-02-13T19:53:49.616030750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:53:49.618261 containerd[1627]: time="2025-02-13T19:53:49.617732353Z" level=info msg="CreateContainer within sandbox \"462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:53:49.644108 containerd[1627]: time="2025-02-13T19:53:49.644070672Z" level=info msg="CreateContainer within sandbox \"462667cccfdc993552fb9c8e26be01c0eddae22e544d79e2db8ec7826a0fd91c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49f87b3021d396ad78193bdba5e69653f63e5fdf507d44b96e6f877df5a0aa14\"" Feb 13 19:53:49.644631 containerd[1627]: time="2025-02-13T19:53:49.644602139Z" level=info msg="StartContainer for \"49f87b3021d396ad78193bdba5e69653f63e5fdf507d44b96e6f877df5a0aa14\"" Feb 13 19:53:49.746030 containerd[1627]: time="2025-02-13T19:53:49.745973534Z" level=info msg="StartContainer for \"49f87b3021d396ad78193bdba5e69653f63e5fdf507d44b96e6f877df5a0aa14\" returns successfully" Feb 13 19:53:49.881341 kubelet[2837]: I0213 19:53:49.879683 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b95766759-6rgg4" 
podStartSLOduration=20.694289516 podStartE2EDuration="28.879667236s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:41.430435863 +0000 UTC m=+42.811129462" lastFinishedPulling="2025-02-13 19:53:49.615813583 +0000 UTC m=+50.996507182" observedRunningTime="2025-02-13 19:53:49.879126781 +0000 UTC m=+51.259820381" watchObservedRunningTime="2025-02-13 19:53:49.879667236 +0000 UTC m=+51.260360835" Feb 13 19:53:50.065500 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:54838.service - OpenSSH per-connection server daemon (10.0.0.1:54838). Feb 13 19:53:50.129269 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 54838 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:50.130957 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:50.134995 systemd-logind[1584]: New session 14 of user core. Feb 13 19:53:50.140495 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:53:50.272902 sshd[5669]: Connection closed by 10.0.0.1 port 54838 Feb 13 19:53:50.273304 sshd-session[5666]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:50.278533 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:54838.service: Deactivated successfully. Feb 13 19:53:50.281793 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:53:50.281818 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:53:50.283610 systemd-logind[1584]: Removed session 14. Feb 13 19:53:50.868448 kubelet[2837]: I0213 19:53:50.868417 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:53:51.917144 containerd[1627]: time="2025-02-13T19:53:51.917080231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:51.917978 containerd[1627]: time="2025-02-13T19:53:51.917938642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:53:51.919263 containerd[1627]: time="2025-02-13T19:53:51.919230765Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:51.921612 containerd[1627]: time="2025-02-13T19:53:51.921579723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:53:51.922220 containerd[1627]: time="2025-02-13T19:53:51.922172535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.306101901s" Feb 13 19:53:51.922220 containerd[1627]: time="2025-02-13T19:53:51.922213452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:53:51.924128 containerd[1627]: time="2025-02-13T19:53:51.923978103Z" level=info msg="CreateContainer within sandbox 
\"5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:53:51.939996 containerd[1627]: time="2025-02-13T19:53:51.939953395Z" level=info msg="CreateContainer within sandbox \"5c7322138364b4ecd3f5d245d090e375b3bda04767641cf77429cf0ac8453752\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"272a071f3c68affc468f52443b9014e50560ba6e80539a7284ea65c91ea1230a\"" Feb 13 19:53:51.940789 containerd[1627]: time="2025-02-13T19:53:51.940751952Z" level=info msg="StartContainer for \"272a071f3c68affc468f52443b9014e50560ba6e80539a7284ea65c91ea1230a\"" Feb 13 19:53:52.009705 containerd[1627]: time="2025-02-13T19:53:52.009660536Z" level=info msg="StartContainer for \"272a071f3c68affc468f52443b9014e50560ba6e80539a7284ea65c91ea1230a\" returns successfully" Feb 13 19:53:52.804195 kubelet[2837]: I0213 19:53:52.804143 2837 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:53:52.804195 kubelet[2837]: I0213 19:53:52.804179 2837 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:53:52.889167 kubelet[2837]: I0213 19:53:52.888973 2837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jzmps" podStartSLOduration=21.072732168 podStartE2EDuration="31.88895616s" podCreationTimestamp="2025-02-13 19:53:21 +0000 UTC" firstStartedPulling="2025-02-13 19:53:41.106610155 +0000 UTC m=+42.487303754" lastFinishedPulling="2025-02-13 19:53:51.922834147 +0000 UTC m=+53.303527746" observedRunningTime="2025-02-13 19:53:52.88871107 +0000 UTC m=+54.269404669" watchObservedRunningTime="2025-02-13 19:53:52.88895616 +0000 UTC m=+54.269649759" Feb 13 19:53:55.285597 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). Feb 13 19:53:55.326665 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:53:55.330273 sshd-session[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:55.334290 systemd-logind[1584]: New session 15 of user core. Feb 13 19:53:55.341477 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:53:55.473317 sshd[5734]: Connection closed by 10.0.0.1 port 54846 Feb 13 19:53:55.473743 sshd-session[5731]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:55.479051 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:54846.service: Deactivated successfully. Feb 13 19:53:55.482152 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:53:55.482993 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:53:55.484134 systemd-logind[1584]: Removed session 15. 
Feb 13 19:53:58.042122 kubelet[2837]: E0213 19:53:58.042088 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:53:58.715317 containerd[1627]: time="2025-02-13T19:53:58.715282404Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:58.715938 containerd[1627]: time="2025-02-13T19:53:58.715403668Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 13 19:53:58.715938 containerd[1627]: time="2025-02-13T19:53:58.715416473Z" level=info msg="StopPodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:58.722635 containerd[1627]: time="2025-02-13T19:53:58.722558559Z" level=info msg="RemovePodSandbox for \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:58.733105 containerd[1627]: time="2025-02-13T19:53:58.733062548Z" level=info msg="Forcibly stopping sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\"" Feb 13 19:53:58.733252 containerd[1627]: time="2025-02-13T19:53:58.733182260Z" level=info msg="TearDown network for sandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" successfully" Feb 13 19:53:58.828378 containerd[1627]: time="2025-02-13T19:53:58.828319749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.828523 containerd[1627]: time="2025-02-13T19:53:58.828412247Z" level=info msg="RemovePodSandbox \"1c3d831f7a9283c53e6dccdbdad537d2a1010d7aad3df45be7bf9eddaefaeda7\" returns successfully" Feb 13 19:53:58.828957 containerd[1627]: time="2025-02-13T19:53:58.828923314Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:58.829090 containerd[1627]: time="2025-02-13T19:53:58.829047524Z" level=info msg="TearDown network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" successfully" Feb 13 19:53:58.829090 containerd[1627]: time="2025-02-13T19:53:58.829061842Z" level=info msg="StopPodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" returns successfully" Feb 13 19:53:58.829444 containerd[1627]: time="2025-02-13T19:53:58.829419282Z" level=info msg="RemovePodSandbox for \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:58.829514 containerd[1627]: time="2025-02-13T19:53:58.829447006Z" level=info msg="Forcibly stopping sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\"" Feb 13 19:53:58.829542 containerd[1627]: time="2025-02-13T19:53:58.829531550Z" level=info msg="TearDown network for sandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" successfully" Feb 13 19:53:58.833853 containerd[1627]: time="2025-02-13T19:53:58.833817037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.833919 containerd[1627]: time="2025-02-13T19:53:58.833856042Z" level=info msg="RemovePodSandbox \"aa7c0c6034822ce4329cd6427e89b9df586cd9d0d9ffec84f29e313bc88bde35\" returns successfully" Feb 13 19:53:58.834178 containerd[1627]: time="2025-02-13T19:53:58.834125793Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" Feb 13 19:53:58.834265 containerd[1627]: time="2025-02-13T19:53:58.834251947Z" level=info msg="TearDown network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" successfully" Feb 13 19:53:58.834307 containerd[1627]: time="2025-02-13T19:53:58.834265603Z" level=info msg="StopPodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" returns successfully" Feb 13 19:53:58.834660 containerd[1627]: time="2025-02-13T19:53:58.834626701Z" level=info msg="RemovePodSandbox for \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" Feb 13 19:53:58.834713 containerd[1627]: time="2025-02-13T19:53:58.834667039Z" level=info msg="Forcibly stopping sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\"" Feb 13 19:53:58.834813 containerd[1627]: time="2025-02-13T19:53:58.834763535Z" level=info msg="TearDown network for sandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" successfully" Feb 13 19:53:58.838698 containerd[1627]: time="2025-02-13T19:53:58.838660823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.838749 containerd[1627]: time="2025-02-13T19:53:58.838706081Z" level=info msg="RemovePodSandbox \"c964be9c80d844d935e3a65ad5b9773bdef38e566d0daf23e63ae9940ec4c70c\" returns successfully" Feb 13 19:53:58.839043 containerd[1627]: time="2025-02-13T19:53:58.839000319Z" level=info msg="StopPodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\"" Feb 13 19:53:58.839178 containerd[1627]: time="2025-02-13T19:53:58.839117746Z" level=info msg="TearDown network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" successfully" Feb 13 19:53:58.839178 containerd[1627]: time="2025-02-13T19:53:58.839131162Z" level=info msg="StopPodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" returns successfully" Feb 13 19:53:58.839489 containerd[1627]: time="2025-02-13T19:53:58.839466920Z" level=info msg="RemovePodSandbox for \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\"" Feb 13 19:53:58.839534 containerd[1627]: time="2025-02-13T19:53:58.839492450Z" level=info msg="Forcibly stopping sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\"" Feb 13 19:53:58.839620 containerd[1627]: time="2025-02-13T19:53:58.839574849Z" level=info msg="TearDown network for sandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" successfully" Feb 13 19:53:58.843301 containerd[1627]: time="2025-02-13T19:53:58.843272251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.843377 containerd[1627]: time="2025-02-13T19:53:58.843311346Z" level=info msg="RemovePodSandbox \"e2f8ceb2d4622eba65ab7445182efc70debce19153e9cb93682362db4738b48e\" returns successfully" Feb 13 19:53:58.843554 containerd[1627]: time="2025-02-13T19:53:58.843532364Z" level=info msg="StopPodSandbox for \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\"" Feb 13 19:53:58.843641 containerd[1627]: time="2025-02-13T19:53:58.843625473Z" level=info msg="TearDown network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\" successfully" Feb 13 19:53:58.843682 containerd[1627]: time="2025-02-13T19:53:58.843640522Z" level=info msg="StopPodSandbox for \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\" returns successfully" Feb 13 19:53:58.843903 containerd[1627]: time="2025-02-13T19:53:58.843881428Z" level=info msg="RemovePodSandbox for \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\"" Feb 13 19:53:58.843946 containerd[1627]: time="2025-02-13T19:53:58.843907979Z" level=info msg="Forcibly stopping sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\"" Feb 13 19:53:58.844015 containerd[1627]: time="2025-02-13T19:53:58.843982322Z" level=info msg="TearDown network for sandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\" successfully" Feb 13 19:53:58.847642 containerd[1627]: time="2025-02-13T19:53:58.847611212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.847700 containerd[1627]: time="2025-02-13T19:53:58.847650979Z" level=info msg="RemovePodSandbox \"88da5ba8d07cbe91db53cb8c9568daa8928e500610407bab7a9e69d065fc2b69\" returns successfully" Feb 13 19:53:58.847923 containerd[1627]: time="2025-02-13T19:53:58.847898146Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:58.848036 containerd[1627]: time="2025-02-13T19:53:58.848006306Z" level=info msg="TearDown network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" successfully" Feb 13 19:53:58.848036 containerd[1627]: time="2025-02-13T19:53:58.848023348Z" level=info msg="StopPodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" returns successfully" Feb 13 19:53:58.848309 containerd[1627]: time="2025-02-13T19:53:58.848289202Z" level=info msg="RemovePodSandbox for \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:58.848396 containerd[1627]: time="2025-02-13T19:53:58.848373063Z" level=info msg="Forcibly stopping sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\"" Feb 13 19:53:58.848494 containerd[1627]: time="2025-02-13T19:53:58.848456815Z" level=info msg="TearDown network for sandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" successfully" Feb 13 19:53:58.852188 containerd[1627]: time="2025-02-13T19:53:58.852156752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.852270 containerd[1627]: time="2025-02-13T19:53:58.852195337Z" level=info msg="RemovePodSandbox \"c941aa95e2f55483dc93a0a5c6234362e2018d76effc59ea781368fc4a615dc7\" returns successfully" Feb 13 19:53:58.852480 containerd[1627]: time="2025-02-13T19:53:58.852456441Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" Feb 13 19:53:58.852563 containerd[1627]: time="2025-02-13T19:53:58.852541506Z" level=info msg="TearDown network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" successfully" Feb 13 19:53:58.852563 containerd[1627]: time="2025-02-13T19:53:58.852558068Z" level=info msg="StopPodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" returns successfully" Feb 13 19:53:58.852841 containerd[1627]: time="2025-02-13T19:53:58.852816877Z" level=info msg="RemovePodSandbox for \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" Feb 13 19:53:58.852883 containerd[1627]: time="2025-02-13T19:53:58.852845763Z" level=info msg="Forcibly stopping sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\"" Feb 13 19:53:58.852952 containerd[1627]: time="2025-02-13T19:53:58.852918814Z" level=info msg="TearDown network for sandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" successfully" Feb 13 19:53:58.856522 containerd[1627]: time="2025-02-13T19:53:58.856493519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.856598 containerd[1627]: time="2025-02-13T19:53:58.856532875Z" level=info msg="RemovePodSandbox \"b81b91989845bc5cf16d36ef4cbcc804e6f22d498ea79415bf780d04ef2a4b3c\" returns successfully" Feb 13 19:53:58.856822 containerd[1627]: time="2025-02-13T19:53:58.856797296Z" level=info msg="StopPodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\"" Feb 13 19:53:58.856914 containerd[1627]: time="2025-02-13T19:53:58.856898331Z" level=info msg="TearDown network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" successfully" Feb 13 19:53:58.856948 containerd[1627]: time="2025-02-13T19:53:58.856915615Z" level=info msg="StopPodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" returns successfully" Feb 13 19:53:58.857192 containerd[1627]: time="2025-02-13T19:53:58.857166379Z" level=info msg="RemovePodSandbox for \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\"" Feb 13 19:53:58.857192 containerd[1627]: time="2025-02-13T19:53:58.857188101Z" level=info msg="Forcibly stopping sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\"" Feb 13 19:53:58.857333 containerd[1627]: time="2025-02-13T19:53:58.857297322Z" level=info msg="TearDown network for sandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" successfully" Feb 13 19:53:58.860833 containerd[1627]: time="2025-02-13T19:53:58.860804225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.860899 containerd[1627]: time="2025-02-13T19:53:58.860868801Z" level=info msg="RemovePodSandbox \"8e746b8894b2f1ba454116cce5a966b280b8b4523a03b741f8839a703f713763\" returns successfully" Feb 13 19:53:58.861163 containerd[1627]: time="2025-02-13T19:53:58.861136337Z" level=info msg="StopPodSandbox for \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\"" Feb 13 19:53:58.861286 containerd[1627]: time="2025-02-13T19:53:58.861257421Z" level=info msg="TearDown network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\" successfully" Feb 13 19:53:58.861286 containerd[1627]: time="2025-02-13T19:53:58.861270617Z" level=info msg="StopPodSandbox for \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\" returns successfully" Feb 13 19:53:58.861508 containerd[1627]: time="2025-02-13T19:53:58.861478098Z" level=info msg="RemovePodSandbox for \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\"" Feb 13 19:53:58.861508 containerd[1627]: time="2025-02-13T19:53:58.861503637Z" level=info msg="Forcibly stopping sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\"" Feb 13 19:53:58.861622 containerd[1627]: time="2025-02-13T19:53:58.861589793Z" level=info msg="TearDown network for sandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\" successfully" Feb 13 19:53:58.865640 containerd[1627]: time="2025-02-13T19:53:58.865588618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.865640 containerd[1627]: time="2025-02-13T19:53:58.865636439Z" level=info msg="RemovePodSandbox \"e73c609d524b9aaac4f5e09151ed662f5c36b9b9f646077f9f97500b7aeac917\" returns successfully" Feb 13 19:53:58.865967 containerd[1627]: time="2025-02-13T19:53:58.865940978Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:58.867324 containerd[1627]: time="2025-02-13T19:53:58.867287168Z" level=info msg="TearDown network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" successfully" Feb 13 19:53:58.867324 containerd[1627]: time="2025-02-13T19:53:58.867305323Z" level=info msg="StopPodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" returns successfully" Feb 13 19:53:58.867660 containerd[1627]: time="2025-02-13T19:53:58.867625401Z" level=info msg="RemovePodSandbox for \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:58.867704 containerd[1627]: time="2025-02-13T19:53:58.867664908Z" level=info msg="Forcibly stopping sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\"" Feb 13 19:53:58.867812 containerd[1627]: time="2025-02-13T19:53:58.867763949Z" level=info msg="TearDown network for sandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" successfully" Feb 13 19:53:58.871554 containerd[1627]: time="2025-02-13T19:53:58.871518622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.871614 containerd[1627]: time="2025-02-13T19:53:58.871583467Z" level=info msg="RemovePodSandbox \"a310912ade2db6e0ff63dbfcd39e4ea181feb652d645d5cd3c6c13d2bd91f6ee\" returns successfully" Feb 13 19:53:58.871880 containerd[1627]: time="2025-02-13T19:53:58.871858057Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" Feb 13 19:53:58.871952 containerd[1627]: time="2025-02-13T19:53:58.871935798Z" level=info msg="TearDown network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" successfully" Feb 13 19:53:58.871952 containerd[1627]: time="2025-02-13T19:53:58.871948633Z" level=info msg="StopPodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" returns successfully" Feb 13 19:53:58.872265 containerd[1627]: time="2025-02-13T19:53:58.872239364Z" level=info msg="RemovePodSandbox for \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" Feb 13 19:53:58.872265 containerd[1627]: time="2025-02-13T19:53:58.872266086Z" level=info msg="Forcibly stopping sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\"" Feb 13 19:53:58.872363 containerd[1627]: time="2025-02-13T19:53:58.872337344Z" level=info msg="TearDown network for sandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" successfully" Feb 13 19:53:58.875794 containerd[1627]: time="2025-02-13T19:53:58.875762981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.875845 containerd[1627]: time="2025-02-13T19:53:58.875800984Z" level=info msg="RemovePodSandbox \"4525eb078c4585af7ff3ca38e51ace926e48fb8219eb9c2c05835b9de39aa4c6\" returns successfully" Feb 13 19:53:58.876095 containerd[1627]: time="2025-02-13T19:53:58.876075575Z" level=info msg="StopPodSandbox for \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\"" Feb 13 19:53:58.876182 containerd[1627]: time="2025-02-13T19:53:58.876158384Z" level=info msg="TearDown network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" successfully" Feb 13 19:53:58.876182 containerd[1627]: time="2025-02-13T19:53:58.876173062Z" level=info msg="StopPodSandbox for \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" returns successfully" Feb 13 19:53:58.876454 containerd[1627]: time="2025-02-13T19:53:58.876427334Z" level=info msg="RemovePodSandbox for \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\"" Feb 13 19:53:58.876454 containerd[1627]: time="2025-02-13T19:53:58.876451861Z" level=info msg="Forcibly stopping sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\"" Feb 13 19:53:58.876549 containerd[1627]: time="2025-02-13T19:53:58.876514122Z" level=info msg="TearDown network for sandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" successfully" Feb 13 19:53:58.880079 containerd[1627]: time="2025-02-13T19:53:58.880042728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.880079 containerd[1627]: time="2025-02-13T19:53:58.880075100Z" level=info msg="RemovePodSandbox \"a7c780dd203bde4d37912d83b913d98310fdc35becaa57c651e39d557be8c826\" returns successfully" Feb 13 19:53:58.880336 containerd[1627]: time="2025-02-13T19:53:58.880304574Z" level=info msg="StopPodSandbox for \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\"" Feb 13 19:53:58.880405 containerd[1627]: time="2025-02-13T19:53:58.880384858Z" level=info msg="TearDown network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\" successfully" Feb 13 19:53:58.880405 containerd[1627]: time="2025-02-13T19:53:58.880398635Z" level=info msg="StopPodSandbox for \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\" returns successfully" Feb 13 19:53:58.880694 containerd[1627]: time="2025-02-13T19:53:58.880670109Z" level=info msg="RemovePodSandbox for \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\"" Feb 13 19:53:58.880735 containerd[1627]: time="2025-02-13T19:53:58.880700167Z" level=info msg="Forcibly stopping sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\"" Feb 13 19:53:58.880829 containerd[1627]: time="2025-02-13T19:53:58.880781284Z" level=info msg="TearDown network for sandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\" successfully" Feb 13 19:53:58.884660 containerd[1627]: time="2025-02-13T19:53:58.884631982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.884725 containerd[1627]: time="2025-02-13T19:53:58.884674604Z" level=info msg="RemovePodSandbox \"0c72de389134e5ddcbc89ed34653f1b5426aa98e7fffea352b0fa235f17db2db\" returns successfully" Feb 13 19:53:58.884981 containerd[1627]: time="2025-02-13T19:53:58.884950156Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:58.885067 containerd[1627]: time="2025-02-13T19:53:58.885045711Z" level=info msg="TearDown network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" successfully" Feb 13 19:53:58.885099 containerd[1627]: time="2025-02-13T19:53:58.885064999Z" level=info msg="StopPodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" returns successfully" Feb 13 19:53:58.885374 containerd[1627]: time="2025-02-13T19:53:58.885330251Z" level=info msg="RemovePodSandbox for \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:58.885374 containerd[1627]: time="2025-02-13T19:53:58.885359066Z" level=info msg="Forcibly stopping sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\"" Feb 13 19:53:58.885467 containerd[1627]: time="2025-02-13T19:53:58.885431436Z" level=info msg="TearDown network for sandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" successfully" Feb 13 19:53:58.889485 containerd[1627]: time="2025-02-13T19:53:58.889451671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.889485 containerd[1627]: time="2025-02-13T19:53:58.889491008Z" level=info msg="RemovePodSandbox \"7bec5c3f8a7896263a28cbef66ca1f3edb012ac893f53c4d222a485c5996552b\" returns successfully" Feb 13 19:53:58.889793 containerd[1627]: time="2025-02-13T19:53:58.889758875Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" Feb 13 19:53:58.889867 containerd[1627]: time="2025-02-13T19:53:58.889855071Z" level=info msg="TearDown network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" successfully" Feb 13 19:53:58.889899 containerd[1627]: time="2025-02-13T19:53:58.889866824Z" level=info msg="StopPodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" returns successfully" Feb 13 19:53:58.890181 containerd[1627]: time="2025-02-13T19:53:58.890157134Z" level=info msg="RemovePodSandbox for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" Feb 13 19:53:58.890255 containerd[1627]: time="2025-02-13T19:53:58.890183726Z" level=info msg="Forcibly stopping sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\"" Feb 13 19:53:58.890321 containerd[1627]: time="2025-02-13T19:53:58.890286344Z" level=info msg="TearDown network for sandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" successfully" Feb 13 19:53:58.896717 containerd[1627]: time="2025-02-13T19:53:58.896688020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.896806 containerd[1627]: time="2025-02-13T19:53:58.896732996Z" level=info msg="RemovePodSandbox \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" returns successfully" Feb 13 19:53:58.897051 containerd[1627]: time="2025-02-13T19:53:58.897018438Z" level=info msg="StopPodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\"" Feb 13 19:53:58.897144 containerd[1627]: time="2025-02-13T19:53:58.897124603Z" level=info msg="TearDown network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" successfully" Feb 13 19:53:58.897144 containerd[1627]: time="2025-02-13T19:53:58.897140162Z" level=info msg="StopPodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" returns successfully" Feb 13 19:53:58.897530 containerd[1627]: time="2025-02-13T19:53:58.897493355Z" level=error msg="PodSandboxStatus for \"eba2530f53cb03cb6a222f61aeda72ce716b64f6984b78fe6c0a6269926bef27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox: not found" Feb 13 19:53:58.897616 containerd[1627]: time="2025-02-13T19:53:58.897494668Z" level=info msg="RemovePodSandbox for \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\"" Feb 13 19:53:58.897616 containerd[1627]: time="2025-02-13T19:53:58.897559022Z" level=info msg="Forcibly stopping sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\"" Feb 13 19:53:58.897677 containerd[1627]: time="2025-02-13T19:53:58.897634628Z" level=info msg="TearDown network for sandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" successfully" Feb 13 19:53:58.901445 containerd[1627]: time="2025-02-13T19:53:58.901415802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:53:58.901507 containerd[1627]: time="2025-02-13T19:53:58.901449417Z" level=info msg="RemovePodSandbox \"db6449b2a7ef470c042e3a9498f59eeadf66c6e995e6f718c2ee92e4f60c79ae\" returns successfully" Feb 13 19:53:58.901674 containerd[1627]: time="2025-02-13T19:53:58.901650034Z" level=info msg="StopPodSandbox for \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\"" Feb 13 19:53:58.901764 containerd[1627]: time="2025-02-13T19:53:58.901735179Z" level=info msg="TearDown network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\" successfully" Feb 13 19:53:58.901764 containerd[1627]: time="2025-02-13T19:53:58.901745359Z" level=info msg="StopPodSandbox for \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\" returns successfully" Feb 13 19:53:58.901962 containerd[1627]: time="2025-02-13T19:53:58.901943391Z" level=info msg="RemovePodSandbox for \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\"" Feb 13 19:53:58.902001 containerd[1627]: time="2025-02-13T19:53:58.901967738Z" level=info msg="Forcibly stopping sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\"" Feb 13 19:53:58.902055 containerd[1627]: time="2025-02-13T19:53:58.902028345Z" level=info msg="TearDown network for sandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\" successfully" Feb 13 19:53:58.906011 containerd[1627]: time="2025-02-13T19:53:58.905979888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:53:58.906077 containerd[1627]: time="2025-02-13T19:53:58.906025556Z" level=info msg="RemovePodSandbox \"7413348adc9b443ed0d168b42a9c546fc60794d406321e6e1995e98e91c76ca6\" returns successfully" Feb 13 19:53:58.906361 containerd[1627]: time="2025-02-13T19:53:58.906334513Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:58.906440 containerd[1627]: time="2025-02-13T19:53:58.906417423Z" level=info msg="TearDown network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" successfully" Feb 13 19:53:58.906440 containerd[1627]: time="2025-02-13T19:53:58.906433895Z" level=info msg="StopPodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" returns successfully" Feb 13 19:53:58.906667 containerd[1627]: time="2025-02-13T19:53:58.906644903Z" level=info msg="RemovePodSandbox for \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:58.906692 containerd[1627]: time="2025-02-13T19:53:58.906670512Z" level=info msg="Forcibly stopping sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\"" Feb 13 19:53:58.906762 containerd[1627]: time="2025-02-13T19:53:58.906736920Z" level=info msg="TearDown network for sandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" successfully" Feb 13 19:53:58.910346 containerd[1627]: time="2025-02-13T19:53:58.910303038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.910395 containerd[1627]: time="2025-02-13T19:53:58.910340350Z" level=info msg="RemovePodSandbox \"cc83a73af3ab137dc3c9eaf93e57383c2b714499b75bb107a23374cd8c0abcec\" returns successfully" Feb 13 19:53:58.910594 containerd[1627]: time="2025-02-13T19:53:58.910566217Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" Feb 13 19:53:58.910657 containerd[1627]: time="2025-02-13T19:53:58.910641793Z" level=info msg="TearDown network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" successfully" Feb 13 19:53:58.910657 containerd[1627]: time="2025-02-13T19:53:58.910652474Z" level=info msg="StopPodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" returns successfully" Feb 13 19:53:58.910937 containerd[1627]: time="2025-02-13T19:53:58.910910593Z" level=info msg="RemovePodSandbox for \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" Feb 13 19:53:58.910995 containerd[1627]: time="2025-02-13T19:53:58.910940940Z" level=info msg="Forcibly stopping sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\"" Feb 13 19:53:58.911055 containerd[1627]: time="2025-02-13T19:53:58.911014172Z" level=info msg="TearDown network for sandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" successfully" Feb 13 19:53:58.915050 containerd[1627]: time="2025-02-13T19:53:58.915023026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.915112 containerd[1627]: time="2025-02-13T19:53:58.915076960Z" level=info msg="RemovePodSandbox \"c1fdf92c00f54716adf8b5e51cfe40faf9b25da501fc86c6a2990360ae2c6749\" returns successfully" Feb 13 19:53:58.915528 containerd[1627]: time="2025-02-13T19:53:58.915480319Z" level=info msg="StopPodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\"" Feb 13 19:53:58.915687 containerd[1627]: time="2025-02-13T19:53:58.915659996Z" level=info msg="TearDown network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" successfully" Feb 13 19:53:58.915687 containerd[1627]: time="2025-02-13T19:53:58.915682149Z" level=info msg="StopPodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" returns successfully" Feb 13 19:53:58.915961 containerd[1627]: time="2025-02-13T19:53:58.915938985Z" level=info msg="RemovePodSandbox for \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\"" Feb 13 19:53:58.915961 containerd[1627]: time="2025-02-13T19:53:58.915964434Z" level=info msg="Forcibly stopping sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\"" Feb 13 19:53:58.916086 containerd[1627]: time="2025-02-13T19:53:58.916040380Z" level=info msg="TearDown network for sandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" successfully" Feb 13 19:53:58.920333 containerd[1627]: time="2025-02-13T19:53:58.920308125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.920393 containerd[1627]: time="2025-02-13T19:53:58.920351799Z" level=info msg="RemovePodSandbox \"03e6ccc4cf299f1e0d18c709ca495fca9ece1a2697581861b200259914ef94ba\" returns successfully" Feb 13 19:53:58.920879 containerd[1627]: time="2025-02-13T19:53:58.920711213Z" level=info msg="StopPodSandbox for \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\"" Feb 13 19:53:58.920879 containerd[1627]: time="2025-02-13T19:53:58.920809884Z" level=info msg="TearDown network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\" successfully" Feb 13 19:53:58.920879 containerd[1627]: time="2025-02-13T19:53:58.920823250Z" level=info msg="StopPodSandbox for \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\" returns successfully" Feb 13 19:53:58.921114 containerd[1627]: time="2025-02-13T19:53:58.921083072Z" level=info msg="RemovePodSandbox for \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\"" Feb 13 19:53:58.921114 containerd[1627]: time="2025-02-13T19:53:58.921105134Z" level=info msg="Forcibly stopping sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\"" Feb 13 19:53:58.921315 containerd[1627]: time="2025-02-13T19:53:58.921186912Z" level=info msg="TearDown network for sandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\" successfully" Feb 13 19:53:58.927561 containerd[1627]: time="2025-02-13T19:53:58.927503203Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.928001 containerd[1627]: time="2025-02-13T19:53:58.927965105Z" level=info msg="RemovePodSandbox \"5f3c75d4c374473424cecb60da4e9c0e92ab9260e1940fbb0e93bff7102bff51\" returns successfully" Feb 13 19:53:58.928549 containerd[1627]: time="2025-02-13T19:53:58.928516991Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:58.928643 containerd[1627]: time="2025-02-13T19:53:58.928625670Z" level=info msg="TearDown network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" successfully" Feb 13 19:53:58.928643 containerd[1627]: time="2025-02-13T19:53:58.928640048Z" level=info msg="StopPodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" returns successfully" Feb 13 19:53:58.928985 containerd[1627]: time="2025-02-13T19:53:58.928951110Z" level=info msg="RemovePodSandbox for \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:58.928985 containerd[1627]: time="2025-02-13T19:53:58.928982250Z" level=info msg="Forcibly stopping sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\"" Feb 13 19:53:58.929097 containerd[1627]: time="2025-02-13T19:53:58.929049990Z" level=info msg="TearDown network for sandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" successfully" Feb 13 19:53:58.932899 containerd[1627]: time="2025-02-13T19:53:58.932860571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.932899 containerd[1627]: time="2025-02-13T19:53:58.932904215Z" level=info msg="RemovePodSandbox \"a895470eef32e06938d0910e036a3ac27821e41a0aac1b7fb3735c6fe2c71b64\" returns successfully" Feb 13 19:53:58.933321 containerd[1627]: time="2025-02-13T19:53:58.933288167Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" Feb 13 19:53:58.933462 containerd[1627]: time="2025-02-13T19:53:58.933428328Z" level=info msg="TearDown network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" successfully" Feb 13 19:53:58.933462 containerd[1627]: time="2025-02-13T19:53:58.933450551Z" level=info msg="StopPodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" returns successfully" Feb 13 19:53:58.933767 containerd[1627]: time="2025-02-13T19:53:58.933744378Z" level=info msg="RemovePodSandbox for \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" Feb 13 19:53:58.933806 containerd[1627]: time="2025-02-13T19:53:58.933770338Z" level=info msg="Forcibly stopping sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\"" Feb 13 19:53:58.933926 containerd[1627]: time="2025-02-13T19:53:58.933866975Z" level=info msg="TearDown network for sandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" successfully" Feb 13 19:53:58.938456 containerd[1627]: time="2025-02-13T19:53:58.938413828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.938525 containerd[1627]: time="2025-02-13T19:53:58.938479564Z" level=info msg="RemovePodSandbox \"0d8a15981db724bef2b316cc40fb55f9f4281eabded8d0a3c2dd312e3b3d7479\" returns successfully" Feb 13 19:53:58.938818 containerd[1627]: time="2025-02-13T19:53:58.938791177Z" level=info msg="StopPodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\"" Feb 13 19:53:58.938906 containerd[1627]: time="2025-02-13T19:53:58.938893825Z" level=info msg="TearDown network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" successfully" Feb 13 19:53:58.938930 containerd[1627]: time="2025-02-13T19:53:58.938907221Z" level=info msg="StopPodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" returns successfully" Feb 13 19:53:58.939265 containerd[1627]: time="2025-02-13T19:53:58.939231798Z" level=info msg="RemovePodSandbox for \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\"" Feb 13 19:53:58.939318 containerd[1627]: time="2025-02-13T19:53:58.939267266Z" level=info msg="Forcibly stopping sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\"" Feb 13 19:53:58.939381 containerd[1627]: time="2025-02-13T19:53:58.939342221Z" level=info msg="TearDown network for sandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" successfully" Feb 13 19:53:58.943397 containerd[1627]: time="2025-02-13T19:53:58.943361174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:53:58.943445 containerd[1627]: time="2025-02-13T19:53:58.943400560Z" level=info msg="RemovePodSandbox \"16de6ffb07e9b337cb8fce9fdd8d88f146171bf1a871d2edc6fa889d62bfcd43\" returns successfully" Feb 13 19:53:58.943751 containerd[1627]: time="2025-02-13T19:53:58.943718564Z" level=info msg="StopPodSandbox for \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\"" Feb 13 19:53:58.943873 containerd[1627]: time="2025-02-13T19:53:58.943846943Z" level=info msg="TearDown network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\" successfully" Feb 13 19:53:58.943913 containerd[1627]: time="2025-02-13T19:53:58.943865889Z" level=info msg="StopPodSandbox for \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\" returns successfully" Feb 13 19:53:58.944126 containerd[1627]: time="2025-02-13T19:53:58.944100692Z" level=info msg="RemovePodSandbox for \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\"" Feb 13 19:53:58.944126 containerd[1627]: time="2025-02-13T19:53:58.944124167Z" level=info msg="Forcibly stopping sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\"" Feb 13 19:53:58.944295 containerd[1627]: time="2025-02-13T19:53:58.944247867Z" level=info msg="TearDown network for sandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\" successfully" Feb 13 19:53:58.948704 containerd[1627]: time="2025-02-13T19:53:58.948661873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:53:58.948704 containerd[1627]: time="2025-02-13T19:53:58.948716899Z" level=info msg="RemovePodSandbox \"62dad25fca7edeef24adb6306e66b8d67139b56538ddfe0aeeebd22a93913aad\" returns successfully" Feb 13 19:54:00.494748 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:37228.service - OpenSSH per-connection server daemon (10.0.0.1:37228). Feb 13 19:54:00.553122 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 37228 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:00.555963 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:00.560866 systemd-logind[1584]: New session 16 of user core. Feb 13 19:54:00.570825 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:54:00.704593 sshd[5774]: Connection closed by 10.0.0.1 port 37228 Feb 13 19:54:00.704974 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:00.714609 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:37238.service - OpenSSH per-connection server daemon (10.0.0.1:37238). Feb 13 19:54:00.715398 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:37228.service: Deactivated successfully. Feb 13 19:54:00.719506 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:54:00.720646 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:54:00.721815 systemd-logind[1584]: Removed session 16. Feb 13 19:54:00.759716 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 37238 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:00.761635 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:00.766223 systemd-logind[1584]: New session 17 of user core. Feb 13 19:54:00.774615 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:54:00.977243 sshd[5789]: Connection closed by 10.0.0.1 port 37238 Feb 13 19:54:00.977724 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:00.988522 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:37250.service - OpenSSH per-connection server daemon (10.0.0.1:37250). Feb 13 19:54:00.989143 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:37238.service: Deactivated successfully. Feb 13 19:54:00.993439 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:54:00.994528 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:54:00.996036 systemd-logind[1584]: Removed session 17. Feb 13 19:54:01.031861 sshd[5796]: Accepted publickey for core from 10.0.0.1 port 37250 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:01.033943 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:01.038754 systemd-logind[1584]: New session 18 of user core. Feb 13 19:54:01.048656 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:54:02.970929 sshd[5802]: Connection closed by 10.0.0.1 port 37250 Feb 13 19:54:02.971488 sshd-session[5796]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:02.982131 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:37260.service - OpenSSH per-connection server daemon (10.0.0.1:37260). Feb 13 19:54:02.983892 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:37250.service: Deactivated successfully. Feb 13 19:54:02.998028 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:54:03.004971 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:54:03.006667 systemd-logind[1584]: Removed session 18. Feb 13 19:54:03.048299 sshd[5846]: Accepted publickey for core from 10.0.0.1 port 37260 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:03.050519 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:03.061349 systemd-logind[1584]: New session 19 of user core. Feb 13 19:54:03.073581 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:54:03.427162 sshd[5853]: Connection closed by 10.0.0.1 port 37260 Feb 13 19:54:03.427619 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:03.436707 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). Feb 13 19:54:03.437535 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:37260.service: Deactivated successfully. Feb 13 19:54:03.440613 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:54:03.442448 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:54:03.443647 systemd-logind[1584]: Removed session 19. Feb 13 19:54:03.481725 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:03.483850 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:03.488902 systemd-logind[1584]: New session 20 of user core. Feb 13 19:54:03.499762 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:54:03.625478 sshd[5866]: Connection closed by 10.0.0.1 port 37268 Feb 13 19:54:03.625901 sshd-session[5860]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:03.630464 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:37268.service: Deactivated successfully. Feb 13 19:54:03.633155 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:54:03.633267 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:54:03.635153 systemd-logind[1584]: Removed session 20. Feb 13 19:54:07.355009 kubelet[2837]: I0213 19:54:07.354959 2837 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:54:08.636428 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:37270.service - OpenSSH per-connection server daemon (10.0.0.1:37270). Feb 13 19:54:08.672391 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 37270 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:08.687969 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:08.691914 systemd-logind[1584]: New session 21 of user core. Feb 13 19:54:08.699473 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:54:08.839246 sshd[5884]: Connection closed by 10.0.0.1 port 37270 Feb 13 19:54:08.839612 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:08.843398 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:37270.service: Deactivated successfully. Feb 13 19:54:08.846144 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:54:08.846312 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:54:08.847380 systemd-logind[1584]: Removed session 21. Feb 13 19:54:13.857523 systemd[1]: Started sshd@21-10.0.0.121:22-10.0.0.1:44472.service - OpenSSH per-connection server daemon (10.0.0.1:44472).
Feb 13 19:54:13.896341 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 44472 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:13.898361 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:13.904222 systemd-logind[1584]: New session 22 of user core. Feb 13 19:54:13.911110 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:54:14.060420 sshd[5904]: Connection closed by 10.0.0.1 port 44472 Feb 13 19:54:14.060828 sshd-session[5900]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:14.065046 systemd[1]: sshd@21-10.0.0.121:22-10.0.0.1:44472.service: Deactivated successfully. Feb 13 19:54:14.067409 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:54:14.067467 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:54:14.068871 systemd-logind[1584]: Removed session 22. Feb 13 19:54:19.077488 systemd[1]: Started sshd@22-10.0.0.121:22-10.0.0.1:44486.service - OpenSSH per-connection server daemon (10.0.0.1:44486). Feb 13 19:54:19.113596 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 44486 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:19.115328 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:19.119481 systemd-logind[1584]: New session 23 of user core. Feb 13 19:54:19.131590 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:54:19.238464 sshd[5921]: Connection closed by 10.0.0.1 port 44486 Feb 13 19:54:19.238825 sshd-session[5918]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:19.243657 systemd[1]: sshd@22-10.0.0.121:22-10.0.0.1:44486.service: Deactivated successfully. Feb 13 19:54:19.245961 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:54:19.246085 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:54:19.247560 systemd-logind[1584]: Removed session 23. Feb 13 19:54:23.730242 kubelet[2837]: E0213 19:54:23.730185 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:54:24.252575 systemd[1]: Started sshd@23-10.0.0.121:22-10.0.0.1:48456.service - OpenSSH per-connection server daemon (10.0.0.1:48456). Feb 13 19:54:24.304258 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 48456 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:54:24.305969 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:54:24.310233 systemd-logind[1584]: New session 24 of user core. Feb 13 19:54:24.318595 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:54:24.464760 sshd[5945]: Connection closed by 10.0.0.1 port 48456 Feb 13 19:54:24.465090 sshd-session[5942]: pam_unix(sshd:session): session closed for user core Feb 13 19:54:24.468105 systemd[1]: sshd@23-10.0.0.121:22-10.0.0.1:48456.service: Deactivated successfully. Feb 13 19:54:24.471616 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:54:24.472870 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:54:24.475480 systemd-logind[1584]: Removed session 24. 
Feb 13 19:54:24.730862 kubelet[2837]: E0213 19:54:24.730780 2837 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"