Feb 13 15:24:26.908437 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:24:26.908462 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:24:26.908476 kernel: BIOS-provided physical RAM map:
Feb 13 15:24:26.908482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:24:26.908489 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:24:26.908497 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:24:26.908506 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:24:26.908515 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:24:26.908521 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:24:26.908528 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:24:26.908540 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:24:26.908548 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:24:26.908555 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:24:26.908562 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:24:26.908571 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:24:26.908580 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:24:26.908592 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:24:26.909275 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:24:26.909284 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:24:26.909292 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:24:26.909299 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:24:26.909307 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:24:26.909316 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:24:26.909324 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:24:26.909333 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:24:26.909339 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:24:26.909348 kernel: NX (Execute Disable) protection: active
Feb 13 15:24:26.909361 kernel: APIC: Static calls initialized
Feb 13 15:24:26.909370 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:24:26.909377 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:24:26.909385 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:24:26.909393 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:24:26.909401 kernel: extended physical RAM map:
Feb 13 15:24:26.909410 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:24:26.909417 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:24:26.909424 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:24:26.909433 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:24:26.909442 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:24:26.909453 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:24:26.909460 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:24:26.909473 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:24:26.909483 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:24:26.909491 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:24:26.909498 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:24:26.909507 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:24:26.909519 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:24:26.909529 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:24:26.909536 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:24:26.909544 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:24:26.909554 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:24:26.909563 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:24:26.909572 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:24:26.909579 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:24:26.909588 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:24:26.909618 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:24:26.909628 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:24:26.909637 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:24:26.909647 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:24:26.909655 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:24:26.909663 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:24:26.909672 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:24:26.909682 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:24:26.909691 kernel: random: crng init done
Feb 13 15:24:26.909698 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:24:26.909707 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:24:26.909720 kernel: secureboot: Secure boot disabled
Feb 13 15:24:26.909729 kernel: SMBIOS 2.8 present.
Feb 13 15:24:26.909736 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:24:26.909745 kernel: Hypervisor detected: KVM
Feb 13 15:24:26.909754 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:24:26.909763 kernel: kvm-clock: using sched offset of 2595060638 cycles
Feb 13 15:24:26.909772 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:24:26.909781 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:24:26.909798 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:24:26.909808 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:24:26.909817 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:24:26.909827 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:24:26.909837 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:24:26.909846 kernel: Using GB pages for direct mapping
Feb 13 15:24:26.909855 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:24:26.909863 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:24:26.909871 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:24:26.909880 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:24:26.909890 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:24:26.909899 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:24:26.909909 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:24:26.909919 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:24:26.909928 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:24:26.909938 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:24:26.909945 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:24:26.909953 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:24:26.909963 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:24:26.909973 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:24:26.909984 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:24:26.909992 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:24:26.910001 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:24:26.910011 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:24:26.910020 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:24:26.910027 kernel: No NUMA configuration found
Feb 13 15:24:26.910034 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:24:26.910042 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:24:26.910052 kernel: Zone ranges:
Feb 13 15:24:26.910062 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:24:26.910073 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:24:26.910080 kernel: Normal empty
Feb 13 15:24:26.910090 kernel: Movable zone start for each node
Feb 13 15:24:26.910099 kernel: Early memory node ranges
Feb 13 15:24:26.910108 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:24:26.910116 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:24:26.910125 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:24:26.910134 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:24:26.910143 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:24:26.910154 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:24:26.910163 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:24:26.910172 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:24:26.910182 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:24:26.910190 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:24:26.910198 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:24:26.910217 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:24:26.910229 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:24:26.910237 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:24:26.910246 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:24:26.910256 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:24:26.910266 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:24:26.910276 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:24:26.910286 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:24:26.910296 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:24:26.910306 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:24:26.910313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:24:26.910326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:24:26.910336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:24:26.910345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:24:26.910353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:24:26.910363 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:24:26.910373 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:24:26.910382 kernel: TSC deadline timer available
Feb 13 15:24:26.910390 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:24:26.910398 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:24:26.910410 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:24:26.910421 kernel: kvm-guest: setup PV sched yield
Feb 13 15:24:26.910429 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:24:26.910437 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:24:26.910446 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:24:26.910456 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:24:26.910467 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:24:26.910475 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:24:26.910484 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:24:26.910496 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:24:26.910506 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:24:26.910515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:24:26.910525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:24:26.910535 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:24:26.910545 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:24:26.910553 kernel: Fallback order for Node 0: 0
Feb 13 15:24:26.910562 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:24:26.910573 kernel: Policy zone: DMA32
Feb 13 15:24:26.910586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:24:26.910619 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 15:24:26.910630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:24:26.910638 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:24:26.910648 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:24:26.910658 kernel: Dynamic Preempt: voluntary
Feb 13 15:24:26.910667 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:24:26.910676 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:24:26.910686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:24:26.910700 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:24:26.910709 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:24:26.910717 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:24:26.910726 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:24:26.910736 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:24:26.910746 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:24:26.910754 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:24:26.910763 kernel: Console: colour dummy device 80x25
Feb 13 15:24:26.910773 kernel: printk: console [ttyS0] enabled
Feb 13 15:24:26.910786 kernel: ACPI: Core revision 20230628
Feb 13 15:24:26.910802 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:24:26.910812 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:24:26.910821 kernel: x2apic enabled
Feb 13 15:24:26.910831 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:24:26.910844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:24:26.912073 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:24:26.912089 kernel: kvm-guest: setup PV IPIs
Feb 13 15:24:26.912098 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:24:26.912112 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:24:26.912123 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 15:24:26.912130 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:24:26.912140 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:24:26.912150 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:24:26.912160 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:24:26.912169 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:24:26.912178 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:24:26.912188 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:24:26.912202 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:24:26.912209 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:24:26.912219 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:24:26.912229 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:24:26.912239 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:24:26.912254 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:24:26.912264 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:24:26.912290 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:24:26.912320 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:24:26.912328 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:24:26.912347 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:24:26.912361 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:24:26.912369 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:24:26.912397 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:24:26.912407 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:24:26.912418 kernel: landlock: Up and running.
Feb 13 15:24:26.912429 kernel: SELinux: Initializing.
Feb 13 15:24:26.912442 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:24:26.912450 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:24:26.912460 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:24:26.912470 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:24:26.912480 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:24:26.912488 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:24:26.912497 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:24:26.912507 kernel: ... version: 0
Feb 13 15:24:26.912517 kernel: ... bit width: 48
Feb 13 15:24:26.912528 kernel: ... generic registers: 6
Feb 13 15:24:26.912537 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:24:26.912548 kernel: ... max period: 00007fffffffffff
Feb 13 15:24:26.912558 kernel: ... fixed-purpose events: 0
Feb 13 15:24:26.912567 kernel: ... event mask: 000000000000003f
Feb 13 15:24:26.912575 kernel: signal: max sigframe size: 1776
Feb 13 15:24:26.912585 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:24:26.912609 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:24:26.912618 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:24:26.912632 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:24:26.912642 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:24:26.912649 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:24:26.912659 kernel: smpboot: Max logical packages: 1
Feb 13 15:24:26.912669 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:24:26.912679 kernel: devtmpfs: initialized
Feb 13 15:24:26.912687 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:24:26.912696 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:24:26.912706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:24:26.912719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:24:26.912727 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:24:26.912736 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:24:26.912746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:24:26.912756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:24:26.912765 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:24:26.912773 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:24:26.912783 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:24:26.912803 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:24:26.912815 kernel: audit: type=2000 audit(1739460267.167:1): state=initialized audit_enabled=0 res=1
Feb 13 15:24:26.912823 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:24:26.912833 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:24:26.912843 kernel: cpuidle: using governor menu
Feb 13 15:24:26.912853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:24:26.912861 kernel: dca service started, version 1.12.1
Feb 13 15:24:26.912870 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:24:26.912880 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:24:26.912890 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:24:26.912901 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:24:26.912910 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:24:26.912920 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:24:26.912930 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:24:26.912939 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:24:26.912947 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:24:26.912957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:24:26.912967 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:24:26.912976 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:24:26.912987 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:24:26.912997 kernel: ACPI: Interpreter enabled
Feb 13 15:24:26.913007 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:24:26.913016 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:24:26.913024 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:24:26.913034 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:24:26.913044 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:24:26.913053 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:24:26.913254 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:24:26.913416 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:24:26.913565 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:24:26.913577 kernel: PCI host bridge to bus 0000:00
Feb 13 15:24:26.913746 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:24:26.913894 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:24:26.915372 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:24:26.915511 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:24:26.915659 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:24:26.915780 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:24:26.915903 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:24:26.916041 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:24:26.916173 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:24:26.916294 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:24:26.916417 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:24:26.916538 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:24:26.917838 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:24:26.918084 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:24:26.918357 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:24:26.918571 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:24:26.918740 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:24:26.918876 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:24:26.919010 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:24:26.919134 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:24:26.919253 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:24:26.919374 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:24:26.919502 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:24:26.919647 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:24:26.919770 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:24:26.919900 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:24:26.920020 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:24:26.920146 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:24:26.920266 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:24:26.920437 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:24:26.920639 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:24:26.920789 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:24:26.920969 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:24:26.921119 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:24:26.921134 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:24:26.921145 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:24:26.921153 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:24:26.921168 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:24:26.921179 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:24:26.921188 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:24:26.921196 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:24:26.921206 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:24:26.921216 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:24:26.921226 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:24:26.921233 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:24:26.921244 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:24:26.921256 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:24:26.921266 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:24:26.921275 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:24:26.921284 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:24:26.921295 kernel: iommu: Default domain type: Translated
Feb 13 15:24:26.921305 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:24:26.921313 kernel: efivars: Registered efivars operations
Feb 13 15:24:26.921323 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:24:26.921333 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:24:26.921346 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:24:26.921354 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:24:26.921364 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:24:26.921374 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:24:26.921384 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:24:26.921392 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:24:26.921402 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:24:26.921412 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:24:26.921560 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:24:26.921740 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:24:26.921898 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:24:26.921912 kernel: vgaarb: loaded
Feb 13 15:24:26.921921 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:24:26.921931 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:24:26.921941 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:24:26.921950 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:24:26.921959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:24:26.921974 kernel: pnp: PnP ACPI init
Feb 13 15:24:26.922128 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:24:26.922144 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:24:26.922152 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:24:26.922163 kernel: NET: Registered PF_INET protocol family
Feb 13 15:24:26.922236 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:24:26.922251 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:24:26.922261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:24:26.922273 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:24:26.922283 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:24:26.922294 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:24:26.922303 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:24:26.922312 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:24:26.922323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:24:26.922333 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:24:26.922492 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:24:26.922674 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:24:26.922841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:24:26.922979 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:24:26.923111 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:24:26.923246 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:24:26.923380 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:24:26.923515 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:24:26.923530 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:24:26.923541 kernel: Initialise system trusted keyrings
Feb 13 15:24:26.923555 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:24:26.923566 kernel: Key type asymmetric registered
Feb 13 15:24:26.923576 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:24:26.923586 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:24:26.923638 kernel: io scheduler mq-deadline registered
Feb 13 15:24:26.923650 kernel: io scheduler kyber registered
Feb 13 15:24:26.923660 kernel: io scheduler bfq registered
Feb 13 15:24:26.923669 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:24:26.923679 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:24:26.923695 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:24:26.923707 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:24:26.923716 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:24:26.923727 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:24:26.923738 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:24:26.923747 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:24:26.923760 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:24:26.923930 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:24:26.924071 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:24:26.924085 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:24:26.924221 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:24:26 UTC (1739460266)
Feb 13 15:24:26.924345 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:24:26.924356 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:24:26.924368 kernel: efifb: probing for efifb
Feb 13 15:24:26.924377 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:24:26.924385 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:24:26.924393 kernel: efifb: scrolling: redraw
Feb 13 15:24:26.924401 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:24:26.924409 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:24:26.924418 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:24:26.924428 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:24:26.924436 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:24:26.924445 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:24:26.924455 kernel: Segment Routing with IPv6
Feb 13 15:24:26.924463 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:24:26.924471 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:24:26.924479 kernel: Key type dns_resolver registered
Feb 13 15:24:26.924487 kernel: IPI shorthand broadcast: enabled
Feb 13 15:24:26.924496 kernel: sched_clock: Marking stable (594002713, 150221150)->(759569795, -15345932)
Feb 13 15:24:26.924504 kernel: registered taskstats version 1
Feb 13 15:24:26.924512 kernel: Loading compiled-in X.509 certificates
Feb 13 15:24:26.924521 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:24:26.924533 kernel: Key type .fscrypt registered
Feb 13 15:24:26.924541 kernel: Key type fscrypt-provisioning registered
Feb 13 15:24:26.924549 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:24:26.924557 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:24:26.924565 kernel: ima: No architecture policies found Feb 13 15:24:26.924574 kernel: clk: Disabling unused clocks Feb 13 15:24:26.924582 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:24:26.924590 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:24:26.924701 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:24:26.924714 kernel: Run /init as init process Feb 13 15:24:26.924721 kernel: with arguments: Feb 13 15:24:26.924736 kernel: /init Feb 13 15:24:26.924745 kernel: with environment: Feb 13 15:24:26.924752 kernel: HOME=/ Feb 13 15:24:26.925841 kernel: TERM=linux Feb 13 15:24:26.925851 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:24:26.925861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:24:26.925879 systemd[1]: Detected virtualization kvm. Feb 13 15:24:26.925887 systemd[1]: Detected architecture x86-64. Feb 13 15:24:26.925896 systemd[1]: Running in initrd. Feb 13 15:24:26.925904 systemd[1]: No hostname configured, using default hostname. Feb 13 15:24:26.925912 systemd[1]: Hostname set to . Feb 13 15:24:26.925920 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:24:26.925929 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:24:26.925940 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:24:26.925950 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:24:26.925960 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:24:26.925968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:24:26.925977 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:24:26.926068 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:24:26.926079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:24:26.926090 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:24:26.926660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:26.926672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:26.926681 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:24:26.926690 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:24:26.926699 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:24:26.926707 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:24:26.926716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:24:26.926724 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:24:26.926736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:24:26.926745 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:24:26.926754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:26.926762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:26.926771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:24:26.926780 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:24:26.926788 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:24:26.926805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:24:26.926816 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:24:26.926825 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:24:26.926833 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:24:26.926842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:24:26.926851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:26.926859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:24:26.926868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:26.926876 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:24:26.926888 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:24:26.926917 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 15:24:26.926939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:26.926948 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:26.926957 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:26.926966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:24:26.926974 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 15:24:26.926983 kernel: Bridge firewalling registered Feb 13 15:24:26.926991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:26.927001 systemd-journald[194]: Journal started Feb 13 15:24:26.927019 systemd-journald[194]: Runtime Journal (/run/log/journal/e8f0064290944e3cbc710c34eaaac495) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:24:26.893480 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:24:26.936561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:24:26.922850 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:24:26.939404 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:24:26.939703 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:26.941309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:26.943828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:26.958826 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:24:26.961463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:24:26.969174 dracut-cmdline[224]: dracut-dracut-053 Feb 13 15:24:26.971817 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:24:26.974530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:24:26.984734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:24:27.013856 systemd-resolved[247]: Positive Trust Anchors: Feb 13 15:24:27.013871 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:24:27.013902 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:24:27.016371 systemd-resolved[247]: Defaulting to hostname 'linux'. Feb 13 15:24:27.017392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:24:27.024625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:24:27.056628 kernel: SCSI subsystem initialized Feb 13 15:24:27.066621 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:24:27.077637 kernel: iscsi: registered transport (tcp) Feb 13 15:24:27.097909 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:24:27.097949 kernel: QLogic iSCSI HBA Driver Feb 13 15:24:27.150047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:24:27.158707 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:24:27.184102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 15:24:27.184179 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:24:27.184213 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:24:27.226645 kernel: raid6: avx2x4 gen() 29153 MB/s Feb 13 15:24:27.243648 kernel: raid6: avx2x2 gen() 25650 MB/s Feb 13 15:24:27.260882 kernel: raid6: avx2x1 gen() 21083 MB/s Feb 13 15:24:27.260924 kernel: raid6: using algorithm avx2x4 gen() 29153 MB/s Feb 13 15:24:27.278891 kernel: raid6: .... xor() 7522 MB/s, rmw enabled Feb 13 15:24:27.278929 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:24:27.300634 kernel: xor: automatically using best checksumming function avx Feb 13 15:24:27.455651 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:24:27.470243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:24:27.479733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:27.493258 systemd-udevd[413]: Using default interface naming scheme 'v255'. Feb 13 15:24:27.498856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:24:27.503792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:24:27.522974 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Feb 13 15:24:27.555868 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:24:27.566750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:24:27.632657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:27.639782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:24:27.650619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:24:27.653878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:24:27.656593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:27.658993 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:24:27.666629 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:24:27.690832 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:24:27.690985 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:24:27.690997 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:24:27.691008 kernel: GPT:9289727 != 19775487 Feb 13 15:24:27.691019 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:24:27.691030 kernel: GPT:9289727 != 19775487 Feb 13 15:24:27.691040 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:24:27.691050 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:27.668021 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:24:27.682023 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:24:27.689469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:24:27.689579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:27.691235 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:27.692373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:27.692570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:27.692687 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:27.705106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:27.716194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:27.717492 kernel: libata version 3.00 loaded. 
Feb 13 15:24:27.717424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:27.721034 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:24:27.721051 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460) Feb 13 15:24:27.723531 kernel: AES CTR mode by8 optimization enabled Feb 13 15:24:27.725614 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459) Feb 13 15:24:27.728716 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:24:27.748433 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:24:27.748454 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:24:27.748622 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:24:27.748765 kernel: scsi host0: ahci Feb 13 15:24:27.748925 kernel: scsi host1: ahci Feb 13 15:24:27.749102 kernel: scsi host2: ahci Feb 13 15:24:27.749252 kernel: scsi host3: ahci Feb 13 15:24:27.749405 kernel: scsi host4: ahci Feb 13 15:24:27.749549 kernel: scsi host5: ahci Feb 13 15:24:27.749743 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:24:27.749755 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:24:27.749766 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:24:27.749785 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:24:27.749796 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:24:27.749807 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:24:27.732178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:27.749869 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Feb 13 15:24:27.753971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:27.758459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:24:27.766971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:24:27.770514 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:24:27.770583 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:24:27.783725 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:24:27.784438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:27.797653 disk-uuid[560]: Primary Header is updated. Feb 13 15:24:27.797653 disk-uuid[560]: Secondary Entries is updated. Feb 13 15:24:27.797653 disk-uuid[560]: Secondary Header is updated. Feb 13 15:24:27.801621 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:27.806631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:27.808364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:24:28.059447 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:24:28.059488 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:28.059500 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:28.059627 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:28.060640 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:24:28.061622 kernel: ata3.00: applying bridge limits Feb 13 15:24:28.061637 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:28.062626 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:28.063622 kernel: ata3.00: configured for UDMA/100 Feb 13 15:24:28.064643 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:24:28.112173 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:24:28.128289 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:24:28.128308 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:24:28.807644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:28.808248 disk-uuid[564]: The operation has completed successfully. Feb 13 15:24:28.837786 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:24:28.837906 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:24:28.865863 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:24:28.869311 sh[596]: Success Feb 13 15:24:28.881652 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:24:28.917532 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:24:28.930137 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:24:28.933407 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:24:28.945538 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:24:28.945573 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:28.945587 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:24:28.945615 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:24:28.946269 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:24:28.951496 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:24:28.954231 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:24:28.965774 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:24:28.968693 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:24:28.977447 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:28.977478 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:28.977492 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:24:28.981634 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:24:28.991086 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:24:28.992625 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:29.003206 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:24:29.008810 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 15:24:29.061819 ignition[692]: Ignition 2.20.0 Feb 13 15:24:29.061835 ignition[692]: Stage: fetch-offline Feb 13 15:24:29.061882 ignition[692]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:29.061895 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:29.062018 ignition[692]: parsed url from cmdline: "" Feb 13 15:24:29.062022 ignition[692]: no config URL provided Feb 13 15:24:29.062028 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:24:29.062039 ignition[692]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:24:29.062070 ignition[692]: op(1): [started] loading QEMU firmware config module Feb 13 15:24:29.062075 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:24:29.072103 ignition[692]: op(1): [finished] loading QEMU firmware config module Feb 13 15:24:29.088075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:24:29.099755 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:24:29.115835 ignition[692]: parsing config with SHA512: 013e3f44e38511e2d125c0b3c565b250f848737c652ab500bb79b657fc99ab1121367ad0d18311d3ba86bad7d54a6fa4b44920873fcbcaaa1b6c906aaaced5c8 Feb 13 15:24:29.120658 unknown[692]: fetched base config from "system" Feb 13 15:24:29.120680 unknown[692]: fetched user config from "qemu" Feb 13 15:24:29.121500 ignition[692]: fetch-offline: fetch-offline passed Feb 13 15:24:29.121633 ignition[692]: Ignition finished successfully Feb 13 15:24:29.122905 systemd-networkd[785]: lo: Link UP Feb 13 15:24:29.122910 systemd-networkd[785]: lo: Gained carrier Feb 13 15:24:29.123895 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 15:24:29.124569 systemd-networkd[785]: Enumeration completed Feb 13 15:24:29.125112 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:29.125116 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:29.126369 systemd-networkd[785]: eth0: Link UP Feb 13 15:24:29.126374 systemd-networkd[785]: eth0: Gained carrier Feb 13 15:24:29.126382 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:29.126570 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:24:29.128895 systemd[1]: Reached target network.target - Network. Feb 13 15:24:29.130358 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:24:29.137667 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:24:29.140718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:24:29.154497 ignition[789]: Ignition 2.20.0 Feb 13 15:24:29.154509 ignition[789]: Stage: kargs Feb 13 15:24:29.154714 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:29.154726 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:29.155542 ignition[789]: kargs: kargs passed Feb 13 15:24:29.155588 ignition[789]: Ignition finished successfully Feb 13 15:24:29.158934 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:24:29.167812 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:24:29.178749 ignition[799]: Ignition 2.20.0 Feb 13 15:24:29.178760 ignition[799]: Stage: disks Feb 13 15:24:29.178930 ignition[799]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:29.178941 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:29.179723 ignition[799]: disks: disks passed Feb 13 15:24:29.182090 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:24:29.179771 ignition[799]: Ignition finished successfully Feb 13 15:24:29.183353 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:24:29.184873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:24:29.187013 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:24:29.188043 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:24:29.189775 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:24:29.199743 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:24:29.213285 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:24:29.220002 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:24:29.231684 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:24:29.317614 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:24:29.318029 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:24:29.319557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:24:29.327695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:24:29.329849 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:24:29.332661 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 15:24:29.336846 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818) Feb 13 15:24:29.332714 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:24:29.344011 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:29.344035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:29.344047 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:24:29.344058 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:24:29.332750 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:24:29.338186 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:24:29.345077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:24:29.348017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:24:29.384378 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:24:29.389704 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:24:29.394470 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:24:29.399337 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:24:29.482380 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:24:29.494687 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:24:29.497454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:24:29.505623 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:29.521815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:24:29.558606 ignition[935]: INFO : Ignition 2.20.0 Feb 13 15:24:29.558606 ignition[935]: INFO : Stage: mount Feb 13 15:24:29.560388 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:29.560388 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:29.563676 ignition[935]: INFO : mount: mount passed Feb 13 15:24:29.564518 ignition[935]: INFO : Ignition finished successfully Feb 13 15:24:29.567635 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:24:29.575936 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:24:29.943782 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:24:29.956754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:24:29.964079 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944) Feb 13 15:24:29.964114 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:29.964129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:29.965611 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:24:29.968621 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:24:29.969383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:24:29.997619 ignition[961]: INFO : Ignition 2.20.0
Feb 13 15:24:29.997619 ignition[961]: INFO : Stage: files
Feb 13 15:24:29.999525 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:24:29.999525 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:24:29.999525 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:24:29.999525 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:24:29.999525 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:24:30.006991 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:24:30.001860 unknown[961]: wrote ssh authorized keys file for user: core
Feb 13 15:24:30.044671 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:24:30.359653 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:24:30.359653 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:24:30.363583 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:24:30.539799 systemd-networkd[785]: eth0: Gained IPv6LL
Feb 13 15:24:30.886756 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:24:31.250347 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:24:31.250347 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 13 15:24:31.254104 ignition[961]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:24:31.281676 ignition[961]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:24:31.286228 ignition[961]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:24:31.287851 ignition[961]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:24:31.287851 ignition[961]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:24:31.287851 ignition[961]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:24:31.287851 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:24:31.287851 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:24:31.287851 ignition[961]: INFO : files: files passed
Feb 13 15:24:31.287851 ignition[961]: INFO : Ignition finished successfully
Feb 13 15:24:31.289417 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:24:31.297747 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:24:31.300098 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:24:31.302115 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:24:31.302218 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:24:31.310108 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:24:31.312902 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:24:31.312902 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:24:31.316407 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:24:31.318998 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:24:31.321737 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:24:31.329732 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:24:31.352451 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:24:31.352572 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:24:31.353762 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:24:31.355908 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:24:31.357993 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:24:31.361416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:24:31.381409 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:24:31.390742 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:24:31.400148 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:24:31.400288 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:24:31.402484 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:24:31.404701 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:24:31.404812 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:24:31.409454 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:24:31.409580 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:24:31.411573 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:24:31.414261 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:24:31.416435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:24:31.418634 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:24:31.419717 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:24:31.420041 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:24:31.423832 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:24:31.425742 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:24:31.426043 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:24:31.426148 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:24:31.431956 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:24:31.432083 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:24:31.434163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:24:31.436342 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:24:31.437307 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:24:31.437410 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:24:31.441736 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:24:31.441845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:24:31.442839 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:24:31.444870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:24:31.450670 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:24:31.450812 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:24:31.454255 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:24:31.455137 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:24:31.455227 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:24:31.456876 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:24:31.456959 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:24:31.458613 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:24:31.458734 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:24:31.460517 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:24:31.460632 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:24:31.472727 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:24:31.473469 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:24:31.473826 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:24:31.473929 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:24:31.474229 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:24:31.474322 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:24:31.478061 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:24:31.478173 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:24:31.502038 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:24:31.514006 ignition[1016]: INFO : Ignition 2.20.0
Feb 13 15:24:31.514006 ignition[1016]: INFO : Stage: umount
Feb 13 15:24:31.515841 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:24:31.515841 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:24:31.518568 ignition[1016]: INFO : umount: umount passed
Feb 13 15:24:31.519407 ignition[1016]: INFO : Ignition finished successfully
Feb 13 15:24:31.522391 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:24:31.522527 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:24:31.524705 systemd[1]: Stopped target network.target - Network.
Feb 13 15:24:31.526384 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:24:31.526437 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:24:31.528314 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:24:31.528359 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:24:31.530247 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:24:31.530291 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:24:31.532232 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:24:31.532277 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:24:31.534495 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:24:31.536645 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:24:31.539669 systemd-networkd[785]: eth0: DHCPv6 lease lost
Feb 13 15:24:31.542617 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:24:31.542764 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:24:31.544143 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:24:31.544182 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:24:31.553728 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:24:31.555744 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:24:31.555814 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:24:31.557193 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:24:31.560010 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:24:31.560143 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:24:31.573972 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:24:31.574046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:24:31.575219 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:24:31.575269 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:24:31.577389 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:24:31.577453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:24:31.579923 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:24:31.580091 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:24:31.582131 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:24:31.582240 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:24:31.584963 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:24:31.585017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:24:31.587276 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:24:31.587319 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:24:31.589184 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:24:31.589232 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:24:31.591540 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:24:31.591588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:24:31.593511 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:24:31.593560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:24:31.605739 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:24:31.605816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:24:31.605868 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:24:31.606248 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:24:31.606291 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:24:31.606630 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:24:31.606672 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:24:31.606957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:24:31.606998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:24:31.612830 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:24:31.612957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:24:31.662667 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:24:31.662810 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:24:31.664876 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:24:31.666591 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:24:31.666658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:24:31.677735 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:24:31.684929 systemd[1]: Switching root.
Feb 13 15:24:31.721985 systemd-journald[194]: Journal stopped
Feb 13 15:24:32.817151 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:24:32.817234 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:24:32.817253 kernel: SELinux: policy capability open_perms=1
Feb 13 15:24:32.817269 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:24:32.817285 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:24:32.817309 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:24:32.817330 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:24:32.817346 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:24:32.817362 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:24:32.817378 kernel: audit: type=1403 audit(1739460272.096:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:24:32.817398 systemd[1]: Successfully loaded SELinux policy in 39.158ms.
Feb 13 15:24:32.817430 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.201ms.
Feb 13 15:24:32.817449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:24:32.817465 systemd[1]: Detected virtualization kvm.
Feb 13 15:24:32.817482 systemd[1]: Detected architecture x86-64.
Feb 13 15:24:32.817498 systemd[1]: Detected first boot.
Feb 13 15:24:32.817521 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:24:32.817540 zram_generator::config[1077]: No configuration found.
Feb 13 15:24:32.817566 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:24:32.817583 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:24:32.817611 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:24:32.817630 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:24:32.817657 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:24:32.817674 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:24:32.817691 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:24:32.817708 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:24:32.817726 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:24:32.817747 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:24:32.817765 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:24:32.817782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:24:32.817799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:24:32.817816 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:24:32.817833 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:24:32.817850 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:24:32.817867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:24:32.817884 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:24:32.817904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:24:32.817921 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:24:32.817938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:24:32.817957 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:24:32.817974 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:24:32.817991 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:24:32.818008 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:24:32.818025 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:24:32.818045 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:24:32.818062 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:24:32.818079 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:24:32.818096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:24:32.818113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:24:32.818130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:24:32.818147 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:24:32.818165 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:24:32.818181 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:24:32.818202 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:32.818219 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:24:32.818236 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:24:32.818253 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:24:32.818270 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:24:32.818287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:24:32.818304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:24:32.818323 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:24:32.818343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:24:32.818360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:24:32.818383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:24:32.818400 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:24:32.818416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:24:32.818434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:24:32.818451 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 15:24:32.818469 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 15:24:32.818489 kernel: fuse: init (API version 7.39)
Feb 13 15:24:32.818504 kernel: loop: module loaded
Feb 13 15:24:32.818520 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:24:32.818537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:24:32.818555 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:24:32.818592 systemd-journald[1166]: Collecting audit messages is disabled.
Feb 13 15:24:32.818657 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:24:32.818675 systemd-journald[1166]: Journal started
Feb 13 15:24:32.818710 systemd-journald[1166]: Runtime Journal (/run/log/journal/e8f0064290944e3cbc710c34eaaac495) is 6.0M, max 48.3M, 42.2M free.
Feb 13 15:24:32.826985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:24:32.827041 kernel: ACPI: bus type drm_connector registered
Feb 13 15:24:32.827059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:32.832712 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:24:32.834163 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:24:32.835505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:24:32.836948 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:24:32.838239 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:24:32.839460 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:24:32.840804 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:24:32.842323 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:24:32.844016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:24:32.845690 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:24:32.845985 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:24:32.847563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:24:32.847877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:24:32.849403 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:24:32.849712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:24:32.851166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:24:32.851447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:24:32.853068 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:24:32.853345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:24:32.854852 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:24:32.855165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:24:32.856777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:24:32.858396 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:24:32.860149 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:24:32.874900 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:24:32.881681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:24:32.884663 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:24:32.886005 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:24:32.888787 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:24:32.892758 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:24:32.894466 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:24:32.896486 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:24:32.898054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:24:32.901221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:24:32.906766 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:24:32.911728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:24:32.913482 systemd-journald[1166]: Time spent on flushing to /var/log/journal/e8f0064290944e3cbc710c34eaaac495 is 26.103ms for 1033 entries.
Feb 13 15:24:32.913482 systemd-journald[1166]: System Journal (/var/log/journal/e8f0064290944e3cbc710c34eaaac495) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:24:32.965015 systemd-journald[1166]: Received client request to flush runtime journal.
Feb 13 15:24:32.915262 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:24:32.923831 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:24:32.928240 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:24:32.938950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:32.942173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:32.949957 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 15:24:32.949975 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 15:24:32.952828 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:24:32.958623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:32.962434 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:24:32.967610 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:24:32.970517 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:24:32.992532 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:24:33.011777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:24:33.031185 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Feb 13 15:24:33.031211 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Feb 13 15:24:33.038502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:33.472328 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:24:33.484765 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:33.508224 systemd-udevd[1243]: Using default interface naming scheme 'v255'. 
Feb 13 15:24:33.524021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:24:33.537898 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:24:33.546263 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:24:33.569356 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 15:24:33.574717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1246)
Feb 13 15:24:33.616624 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:24:33.627903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:24:33.629920 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:24:33.640860 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:24:33.652958 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 15:24:33.653236 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:24:33.653399 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:24:33.654129 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:24:33.655720 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 15:24:33.681634 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:24:33.710521 systemd-networkd[1250]: lo: Link UP
Feb 13 15:24:33.710532 systemd-networkd[1250]: lo: Gained carrier
Feb 13 15:24:33.712683 systemd-networkd[1250]: Enumeration completed
Feb 13 15:24:33.713078 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:24:33.713083 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:24:33.714054 systemd-networkd[1250]: eth0: Link UP
Feb 13 15:24:33.714108 systemd-networkd[1250]: eth0: Gained carrier
Feb 13 15:24:33.714153 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:24:33.714654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:24:33.716222 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:24:33.721850 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:24:33.765135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:24:33.765942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:24:33.771477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:24:33.782912 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:24:33.789647 kernel: kvm_amd: TSC scaling supported
Feb 13 15:24:33.789720 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 15:24:33.789739 kernel: kvm_amd: Nested Paging enabled
Feb 13 15:24:33.789760 kernel: kvm_amd: LBR virtualization supported
Feb 13 15:24:33.789779 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 15:24:33.789962 kernel: kvm_amd: Virtual GIF supported
Feb 13 15:24:33.808793 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:24:33.830497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:24:33.845852 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:24:33.869737 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:24:33.878455 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:24:33.916174 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:24:33.917778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:24:33.929776 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:24:33.936191 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:24:33.970162 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:24:33.971781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:24:33.973090 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:24:33.973121 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:24:33.974204 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:24:33.976296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:24:33.988825 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:24:33.992351 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:24:33.993577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:24:33.994832 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:24:33.997943 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:24:34.001249 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:24:34.003473 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:24:34.013712 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:24:34.019647 kernel: loop0: detected capacity change from 0 to 140992
Feb 13 15:24:34.032391 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:24:34.034113 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:24:34.041636 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:24:34.067643 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 15:24:34.096627 kernel: loop2: detected capacity change from 0 to 211296
Feb 13 15:24:34.124637 kernel: loop3: detected capacity change from 0 to 140992
Feb 13 15:24:34.132630 kernel: loop4: detected capacity change from 0 to 138184
Feb 13 15:24:34.141644 kernel: loop5: detected capacity change from 0 to 211296
Feb 13 15:24:34.146579 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:24:34.147281 (sd-merge)[1319]: Merged extensions into '/usr'.
Feb 13 15:24:34.151832 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:24:34.151847 systemd[1]: Reloading...
Feb 13 15:24:34.200682 zram_generator::config[1348]: No configuration found.
Feb 13 15:24:34.266779 ldconfig[1303]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:24:34.331639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:24:34.395235 systemd[1]: Reloading finished in 242 ms.
Feb 13 15:24:34.412804 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:24:34.414331 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:24:34.425998 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:24:34.428319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:24:34.432192 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:24:34.432207 systemd[1]: Reloading...
Feb 13 15:24:34.461587 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:24:34.462393 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:24:34.463430 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:24:34.463749 systemd-tmpfiles[1392]: ACLs are not supported, ignoring.
Feb 13 15:24:34.463823 systemd-tmpfiles[1392]: ACLs are not supported, ignoring.
Feb 13 15:24:34.468705 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:24:34.468719 systemd-tmpfiles[1392]: Skipping /boot
Feb 13 15:24:34.485834 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:24:34.485925 systemd-tmpfiles[1392]: Skipping /boot
Feb 13 15:24:34.486641 zram_generator::config[1421]: No configuration found.
Feb 13 15:24:34.595645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:24:34.659866 systemd[1]: Reloading finished in 227 ms.
Feb 13 15:24:34.678583 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:24:34.693954 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:24:34.696668 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:24:34.700738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:24:34.704002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:24:34.707507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:24:34.716433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:34.717210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:24:34.718791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:24:34.722194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:24:34.729440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:24:34.730529 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:24:34.730679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:34.731681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:24:34.731909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:24:34.733497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:24:34.734446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:24:34.741335 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:24:34.745033 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:24:34.746053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:24:34.755377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:24:34.757811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:34.758140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:24:34.765970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:24:34.769755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:24:34.771499 augenrules[1506]: No rules
Feb 13 15:24:34.774059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:24:34.775378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:24:34.779484 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:24:34.780657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:34.784759 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:24:34.785753 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:24:34.787851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:24:34.788068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:24:34.789782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:24:34.789986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:24:34.791868 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:24:34.792100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:24:34.793200 systemd-resolved[1469]: Positive Trust Anchors:
Feb 13 15:24:34.793479 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:24:34.793550 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:24:34.798232 systemd-resolved[1469]: Defaulting to hostname 'linux'.
Feb 13 15:24:34.799725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:34.814012 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:24:34.815068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:24:34.816697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:24:34.818841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:24:34.822843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:24:34.828216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:24:34.830699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:24:34.830824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:24:34.831532 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:24:34.832762 augenrules[1523]: /sbin/augenrules: No change
Feb 13 15:24:34.834340 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:24:34.836420 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:24:34.838007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:24:34.838227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:24:34.840057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:24:34.840268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:24:34.840589 augenrules[1549]: No rules
Feb 13 15:24:34.842345 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:24:34.842669 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:24:34.844165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:24:34.844376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:24:34.846275 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:24:34.846518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:24:34.851984 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:24:34.857702 systemd[1]: Reached target network.target - Network.
Feb 13 15:24:34.858991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:24:34.860196 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:24:34.860255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:24:34.873740 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:24:34.874875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:24:34.937156 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:24:34.938872 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:24:34.940304 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:24:34.941918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:24:34.943614 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:24:36.233775 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:24:36.233801 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:24:36.233808 systemd-resolved[1469]: Clock change detected. Flushing caches.
Feb 13 15:24:36.234981 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:24:36.235012 systemd-timesyncd[1565]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:24:36.236352 systemd-timesyncd[1565]: Initial clock synchronization to Thu 2025-02-13 15:24:36.233758 UTC.
Feb 13 15:24:36.236460 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:24:36.237684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:24:36.238921 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:24:36.240520 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:24:36.243891 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:24:36.246338 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:24:36.259408 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:24:36.260625 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:24:36.261649 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:24:36.262790 systemd[1]: System is tainted: cgroupsv1
Feb 13 15:24:36.262832 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:24:36.262856 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:24:36.264515 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:24:36.267010 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:24:36.269403 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:24:36.274249 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:24:36.275334 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:24:36.277276 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:24:36.278500 jq[1571]: false
Feb 13 15:24:36.283358 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:24:36.287691 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:24:36.291396 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found loop3
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found loop4
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found loop5
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found sr0
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda1
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda2
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda3
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found usr
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda4
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda6
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda7
Feb 13 15:24:36.298045 extend-filesystems[1573]: Found vda9
Feb 13 15:24:36.298045 extend-filesystems[1573]: Checking size of /dev/vda9
Feb 13 15:24:36.321531 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:24:36.321561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1248)
Feb 13 15:24:36.309112 dbus-daemon[1570]: [system] SELinux support is enabled
Feb 13 15:24:36.321968 extend-filesystems[1573]: Resized partition /dev/vda9
Feb 13 15:24:36.299368 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:24:36.323224 extend-filesystems[1596]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:24:36.308655 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:24:36.318389 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:24:36.324998 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:24:36.327685 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:24:36.367650 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:24:36.367979 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:24:36.368352 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:24:36.368645 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:24:36.372052 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:24:36.373449 jq[1598]: true
Feb 13 15:24:36.377174 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:24:36.401640 update_engine[1595]: I20250213 15:24:36.373982 1595 main.cc:92] Flatcar Update Engine starting
Feb 13 15:24:36.401640 update_engine[1595]: I20250213 15:24:36.377588 1595 update_check_scheduler.cc:74] Next update check in 4m7s
Feb 13 15:24:36.373676 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:24:36.394849 (ntainerd)[1605]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:24:36.402460 jq[1604]: true
Feb 13 15:24:36.413684 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:24:36.413684 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:24:36.413684 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:24:36.413267 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:24:36.419556 tar[1601]: linux-amd64/helm
Feb 13 15:24:36.419792 extend-filesystems[1573]: Resized filesystem in /dev/vda9
Feb 13 15:24:36.416106 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:24:36.417888 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:24:36.425078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:24:36.425115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:24:36.427191 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:24:36.427208 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:24:36.429985 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:24:36.440307 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:24:36.449192 systemd-logind[1588]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:24:36.452183 systemd-logind[1588]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:24:36.453465 systemd-logind[1588]: New seat seat0.
Feb 13 15:24:36.456470 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:24:36.477611 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:24:36.479973 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:24:36.482051 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:24:36.487590 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:24:36.698955 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:24:36.732254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:24:36.741497 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:24:36.749926 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:24:36.750290 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:24:36.756662 containerd[1605]: time="2025-02-13T15:24:36.756559187Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:24:36.761703 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:24:36.774504 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:24:36.779102 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:24:36.781531 containerd[1605]: time="2025-02-13T15:24:36.779721404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.783608 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:24:36.785038 containerd[1605]: time="2025-02-13T15:24:36.784985647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785038 containerd[1605]: time="2025-02-13T15:24:36.785031643Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:24:36.785109 containerd[1605]: time="2025-02-13T15:24:36.785053855Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:24:36.785331 containerd[1605]: time="2025-02-13T15:24:36.785303693Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:24:36.785331 containerd[1605]: time="2025-02-13T15:24:36.785328550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785501 containerd[1605]: time="2025-02-13T15:24:36.785396567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785501 containerd[1605]: time="2025-02-13T15:24:36.785484332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785762 containerd[1605]: time="2025-02-13T15:24:36.785733379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785762 containerd[1605]: time="2025-02-13T15:24:36.785755621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785810 containerd[1605]: time="2025-02-13T15:24:36.785769697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785810 containerd[1605]: time="2025-02-13T15:24:36.785780277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.786277 containerd[1605]: time="2025-02-13T15:24:36.785871247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.785948 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:24:36.787400 containerd[1605]: time="2025-02-13T15:24:36.787083481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:24:36.787400 containerd[1605]: time="2025-02-13T15:24:36.787334331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:24:36.787400 containerd[1605]: time="2025-02-13T15:24:36.787349410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:24:36.787482 containerd[1605]: time="2025-02-13T15:24:36.787449217Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:24:36.787524 containerd[1605]: time="2025-02-13T15:24:36.787502497Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:24:36.794455 containerd[1605]: time="2025-02-13T15:24:36.794428896Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:24:36.794499 containerd[1605]: time="2025-02-13T15:24:36.794483288Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:24:36.794524 containerd[1605]: time="2025-02-13T15:24:36.794500009Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:24:36.794524 containerd[1605]: time="2025-02-13T15:24:36.794517733Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:24:36.794569 containerd[1605]: time="2025-02-13T15:24:36.794534344Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:24:36.794693 containerd[1605]: time="2025-02-13T15:24:36.794670680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:24:36.795039 containerd[1605]: time="2025-02-13T15:24:36.795015115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:24:36.795180 containerd[1605]: time="2025-02-13T15:24:36.795160258Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:24:36.795208 containerd[1605]: time="2025-02-13T15:24:36.795180656Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:24:36.795208 containerd[1605]: time="2025-02-13T15:24:36.795196285Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795210462Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795224578Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795238625Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795253873Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795269643Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1 Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795286204Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:24:36.795309 containerd[1605]: time="2025-02-13T15:24:36.795303326Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795318845Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795344994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795359792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795374860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795389918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795402873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795430 containerd[1605]: time="2025-02-13T15:24:36.795425695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795438970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795456303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795469538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795485918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795498452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795509693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795521415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795534419Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:24:36.795554 containerd[1605]: time="2025-02-13T15:24:36.795554848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795568002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795578973Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795625330Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795644275Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795657821Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795672398Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795683890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795700811Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:24:36.795714 containerd[1605]: time="2025-02-13T15:24:36.795713445Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:24:36.795868 containerd[1605]: time="2025-02-13T15:24:36.795725017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:24:36.796047 containerd[1605]: time="2025-02-13T15:24:36.795999992Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:24:36.796205 containerd[1605]: time="2025-02-13T15:24:36.796053302Z" level=info msg="Connect containerd service" Feb 13 15:24:36.796205 containerd[1605]: time="2025-02-13T15:24:36.796093107Z" level=info msg="using legacy CRI server" Feb 13 15:24:36.796205 containerd[1605]: time="2025-02-13T15:24:36.796102605Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:24:36.796289 containerd[1605]: time="2025-02-13T15:24:36.796270590Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:24:36.796834 containerd[1605]: time="2025-02-13T15:24:36.796803419Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797087923Z" level=info msg="Start subscribing containerd event" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797158465Z" level=info msg="Start recovering state" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797199883Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797230230Z" level=info msg="Start event monitor" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797259865Z" level=info msg="Start snapshots syncer" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797269974Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797278330Z" level=info msg="Start streaming server" Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797295722Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:24:36.800153 containerd[1605]: time="2025-02-13T15:24:36.797471252Z" level=info msg="containerd successfully booted in 0.042440s" Feb 13 15:24:36.797619 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:24:36.884287 systemd-networkd[1250]: eth0: Gained IPv6LL Feb 13 15:24:36.888110 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:24:36.889971 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:24:36.898549 tar[1601]: linux-amd64/LICENSE Feb 13 15:24:36.898549 tar[1601]: linux-amd64/README.md Feb 13 15:24:36.898425 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:24:36.901626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:36.904480 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:24:36.916022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:24:36.928416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:24:36.930526 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:24:36.930927 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Feb 13 15:24:36.935247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:24:37.537031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:37.539057 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:24:37.541523 systemd[1]: Startup finished in 6.120s (kernel) + 4.194s (userspace) = 10.315s. Feb 13 15:24:37.542859 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:24:38.125191 kubelet[1707]: E0213 15:24:38.125072 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:24:38.129278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:24:38.129542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:24:45.497727 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:24:45.507384 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:53274.service - OpenSSH per-connection server daemon (10.0.0.1:53274). Feb 13 15:24:45.550549 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 53274 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:45.552440 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:45.561164 systemd-logind[1588]: New session 1 of user core. Feb 13 15:24:45.562350 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:24:45.575326 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Feb 13 15:24:45.588270 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:24:45.598428 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:24:45.601377 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:24:45.707588 systemd[1726]: Queued start job for default target default.target. Feb 13 15:24:45.707963 systemd[1726]: Created slice app.slice - User Application Slice. Feb 13 15:24:45.707985 systemd[1726]: Reached target paths.target - Paths. Feb 13 15:24:45.707998 systemd[1726]: Reached target timers.target - Timers. Feb 13 15:24:45.723210 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:24:45.729259 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:24:45.729315 systemd[1726]: Reached target sockets.target - Sockets. Feb 13 15:24:45.729328 systemd[1726]: Reached target basic.target - Basic System. Feb 13 15:24:45.729363 systemd[1726]: Reached target default.target - Main User Target. Feb 13 15:24:45.729392 systemd[1726]: Startup finished in 121ms. Feb 13 15:24:45.729897 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:24:45.731402 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:24:45.791713 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:53284.service - OpenSSH per-connection server daemon (10.0.0.1:53284). Feb 13 15:24:45.829617 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 53284 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:45.831006 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:45.834807 systemd-logind[1588]: New session 2 of user core. Feb 13 15:24:45.844400 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 15:24:45.896860 sshd[1742]: Connection closed by 10.0.0.1 port 53284 Feb 13 15:24:45.897257 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:45.911430 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:53300.service - OpenSSH per-connection server daemon (10.0.0.1:53300). Feb 13 15:24:45.911900 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:53284.service: Deactivated successfully. Feb 13 15:24:45.914354 systemd-logind[1588]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:24:45.915338 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:24:45.916288 systemd-logind[1588]: Removed session 2. Feb 13 15:24:45.948951 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 53300 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:45.950403 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:45.954194 systemd-logind[1588]: New session 3 of user core. Feb 13 15:24:45.964368 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:24:46.012946 sshd[1750]: Connection closed by 10.0.0.1 port 53300 Feb 13 15:24:46.013243 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:46.021351 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:53302.service - OpenSSH per-connection server daemon (10.0.0.1:53302). Feb 13 15:24:46.021836 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:53300.service: Deactivated successfully. Feb 13 15:24:46.023635 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:24:46.024230 systemd-logind[1588]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:24:46.025274 systemd-logind[1588]: Removed session 3. 
Feb 13 15:24:46.058802 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 53302 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:46.060357 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:46.064159 systemd-logind[1588]: New session 4 of user core. Feb 13 15:24:46.074381 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:24:46.126949 sshd[1758]: Connection closed by 10.0.0.1 port 53302 Feb 13 15:24:46.127534 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:46.136374 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:53312.service - OpenSSH per-connection server daemon (10.0.0.1:53312). Feb 13 15:24:46.136846 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:53302.service: Deactivated successfully. Feb 13 15:24:46.139445 systemd-logind[1588]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:24:46.140549 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:24:46.141685 systemd-logind[1588]: Removed session 4. Feb 13 15:24:46.175109 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 53312 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:46.176686 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:46.180529 systemd-logind[1588]: New session 5 of user core. Feb 13 15:24:46.194447 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 15:24:46.253024 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:24:46.253389 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:46.273973 sudo[1767]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:46.275822 sshd[1766]: Connection closed by 10.0.0.1 port 53312 Feb 13 15:24:46.276329 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:46.288363 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:53314.service - OpenSSH per-connection server daemon (10.0.0.1:53314). Feb 13 15:24:46.288811 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:53312.service: Deactivated successfully. Feb 13 15:24:46.291184 systemd-logind[1588]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:24:46.292377 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:24:46.293377 systemd-logind[1588]: Removed session 5. Feb 13 15:24:46.327906 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 53314 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:46.329528 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:46.333530 systemd-logind[1588]: New session 6 of user core. Feb 13 15:24:46.343434 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:24:46.397975 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:24:46.398326 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:46.402097 sudo[1777]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:46.408207 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:24:46.408540 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:46.426393 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:24:46.454584 augenrules[1799]: No rules Feb 13 15:24:46.456338 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:24:46.456712 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:24:46.458006 sudo[1776]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:46.459594 sshd[1775]: Connection closed by 10.0.0.1 port 53314 Feb 13 15:24:46.459938 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:46.478447 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:53326.service - OpenSSH per-connection server daemon (10.0.0.1:53326). Feb 13 15:24:46.479040 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:53314.service: Deactivated successfully. Feb 13 15:24:46.480821 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:24:46.481513 systemd-logind[1588]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:24:46.482712 systemd-logind[1588]: Removed session 6. Feb 13 15:24:46.515307 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 53326 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:46.516754 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:46.520754 systemd-logind[1588]: New session 7 of user core. 
Feb 13 15:24:46.530393 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:24:46.583266 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:24:46.583584 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:46.847361 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:24:46.847659 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:24:47.082939 dockerd[1832]: time="2025-02-13T15:24:47.082869425Z" level=info msg="Starting up" Feb 13 15:24:47.667613 dockerd[1832]: time="2025-02-13T15:24:47.667547304Z" level=info msg="Loading containers: start." Feb 13 15:24:47.861173 kernel: Initializing XFRM netlink socket Feb 13 15:24:47.938970 systemd-networkd[1250]: docker0: Link UP Feb 13 15:24:47.984538 dockerd[1832]: time="2025-02-13T15:24:47.984490847Z" level=info msg="Loading containers: done." Feb 13 15:24:47.999159 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2673824823-merged.mount: Deactivated successfully. 
Feb 13 15:24:48.000930 dockerd[1832]: time="2025-02-13T15:24:48.000888087Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:24:48.001005 dockerd[1832]: time="2025-02-13T15:24:48.000988125Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:24:48.001197 dockerd[1832]: time="2025-02-13T15:24:48.001120623Z" level=info msg="Daemon has completed initialization" Feb 13 15:24:48.038813 dockerd[1832]: time="2025-02-13T15:24:48.038744812Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:24:48.039008 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:24:48.152887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:24:48.168466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:48.318328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:48.323085 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:24:48.374918 kubelet[2040]: E0213 15:24:48.374842 2040 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:24:48.384089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:24:48.384541 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:24:49.179383 containerd[1605]: time="2025-02-13T15:24:49.179329095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:24:50.087066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970699079.mount: Deactivated successfully. Feb 13 15:24:51.691949 containerd[1605]: time="2025-02-13T15:24:51.691892708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:51.692676 containerd[1605]: time="2025-02-13T15:24:51.692639228Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:24:51.693957 containerd[1605]: time="2025-02-13T15:24:51.693910452Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:51.697210 containerd[1605]: time="2025-02-13T15:24:51.697157712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:51.698447 containerd[1605]: time="2025-02-13T15:24:51.698413778Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.51904016s" Feb 13 15:24:51.698493 containerd[1605]: time="2025-02-13T15:24:51.698452290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:24:51.726728 containerd[1605]: 
time="2025-02-13T15:24:51.726693051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:24:53.821925 containerd[1605]: time="2025-02-13T15:24:53.821862292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:53.822735 containerd[1605]: time="2025-02-13T15:24:53.822696616Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 15:24:53.823926 containerd[1605]: time="2025-02-13T15:24:53.823894102Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:53.826521 containerd[1605]: time="2025-02-13T15:24:53.826487626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:53.827721 containerd[1605]: time="2025-02-13T15:24:53.827681725Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 2.100952206s" Feb 13 15:24:53.827761 containerd[1605]: time="2025-02-13T15:24:53.827730386Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 15:24:53.851023 containerd[1605]: time="2025-02-13T15:24:53.850987171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 
15:24:55.560889 containerd[1605]: time="2025-02-13T15:24:55.560813676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:55.561724 containerd[1605]: time="2025-02-13T15:24:55.561655886Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 15:24:55.563083 containerd[1605]: time="2025-02-13T15:24:55.563041474Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:55.566991 containerd[1605]: time="2025-02-13T15:24:55.566952569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:55.568062 containerd[1605]: time="2025-02-13T15:24:55.568001296Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.716963409s" Feb 13 15:24:55.568062 containerd[1605]: time="2025-02-13T15:24:55.568058653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:24:55.594161 containerd[1605]: time="2025-02-13T15:24:55.594102354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:24:56.807648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792775332.mount: Deactivated successfully. 
Feb 13 15:24:57.479345 containerd[1605]: time="2025-02-13T15:24:57.479276479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:57.480234 containerd[1605]: time="2025-02-13T15:24:57.480136191Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 15:24:57.481599 containerd[1605]: time="2025-02-13T15:24:57.481548991Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:57.483565 containerd[1605]: time="2025-02-13T15:24:57.483539554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:24:57.484138 containerd[1605]: time="2025-02-13T15:24:57.484112959Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.889974146s" Feb 13 15:24:57.484212 containerd[1605]: time="2025-02-13T15:24:57.484155759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:24:57.505821 containerd[1605]: time="2025-02-13T15:24:57.505784341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:24:58.158851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846890637.mount: Deactivated successfully. 
Feb 13 15:24:58.402811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:24:58.411983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:58.581976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:58.587860 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:24:58.633415 kubelet[2180]: E0213 15:24:58.633336 2180 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:24:58.638628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:24:58.639069 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:25:00.286953 containerd[1605]: time="2025-02-13T15:25:00.286885328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:00.319906 containerd[1605]: time="2025-02-13T15:25:00.319824310Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:25:00.339490 containerd[1605]: time="2025-02-13T15:25:00.339455105Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:00.354157 containerd[1605]: time="2025-02-13T15:25:00.354091373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:00.355052 containerd[1605]: time="2025-02-13T15:25:00.355015907Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.849194617s" Feb 13 15:25:00.355090 containerd[1605]: time="2025-02-13T15:25:00.355052285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:25:00.377473 containerd[1605]: time="2025-02-13T15:25:00.377435411Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:25:00.878865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317874992.mount: Deactivated successfully. 
Feb 13 15:25:00.884192 containerd[1605]: time="2025-02-13T15:25:00.884131577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:00.884914 containerd[1605]: time="2025-02-13T15:25:00.884852409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:25:00.886200 containerd[1605]: time="2025-02-13T15:25:00.886123363Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:00.888350 containerd[1605]: time="2025-02-13T15:25:00.888317938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:00.888967 containerd[1605]: time="2025-02-13T15:25:00.888938352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 511.467895ms" Feb 13 15:25:00.889023 containerd[1605]: time="2025-02-13T15:25:00.888968589Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:25:00.911970 containerd[1605]: time="2025-02-13T15:25:00.911929849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:25:01.445574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008622152.mount: Deactivated successfully. 
Feb 13 15:25:03.283040 containerd[1605]: time="2025-02-13T15:25:03.282969715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:03.284097 containerd[1605]: time="2025-02-13T15:25:03.284028571Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 15:25:03.285537 containerd[1605]: time="2025-02-13T15:25:03.285476947Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:03.289749 containerd[1605]: time="2025-02-13T15:25:03.289698304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:03.290975 containerd[1605]: time="2025-02-13T15:25:03.290944771Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.378980928s" Feb 13 15:25:03.291024 containerd[1605]: time="2025-02-13T15:25:03.290977693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:25:05.427628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:05.446383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:05.463040 systemd[1]: Reloading requested from client PID 2366 ('systemctl') (unit session-7.scope)... Feb 13 15:25:05.463055 systemd[1]: Reloading... 
Feb 13 15:25:05.540207 zram_generator::config[2408]: No configuration found. Feb 13 15:25:05.695952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:25:05.770041 systemd[1]: Reloading finished in 306 ms. Feb 13 15:25:05.832318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:05.836434 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:25:05.836928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:05.849610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:06.003959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:06.016724 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:25:06.061835 kubelet[2468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:25:06.061835 kubelet[2468]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:25:06.061835 kubelet[2468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:25:06.062115 kubelet[2468]: I0213 15:25:06.061911 2468 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:25:06.279848 kubelet[2468]: I0213 15:25:06.279730 2468 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:25:06.279848 kubelet[2468]: I0213 15:25:06.279761 2468 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:25:06.280019 kubelet[2468]: I0213 15:25:06.280011 2468 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:25:06.295309 kubelet[2468]: E0213 15:25:06.295272 2468 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.295971 kubelet[2468]: I0213 15:25:06.295931 2468 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:25:06.309670 kubelet[2468]: I0213 15:25:06.309642 2468 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:25:06.311040 kubelet[2468]: I0213 15:25:06.311003 2468 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:25:06.311238 kubelet[2468]: I0213 15:25:06.311204 2468 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:25:06.311667 kubelet[2468]: I0213 15:25:06.311637 2468 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:25:06.311667 kubelet[2468]: I0213 15:25:06.311654 2468 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:25:06.311801 kubelet[2468]: 
I0213 15:25:06.311776 2468 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:25:06.311899 kubelet[2468]: I0213 15:25:06.311876 2468 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:25:06.311899 kubelet[2468]: I0213 15:25:06.311897 2468 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:25:06.311943 kubelet[2468]: I0213 15:25:06.311931 2468 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:25:06.311973 kubelet[2468]: I0213 15:25:06.311948 2468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:25:06.313448 kubelet[2468]: I0213 15:25:06.313379 2468 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:25:06.314442 kubelet[2468]: W0213 15:25:06.314355 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.314613 kubelet[2468]: E0213 15:25:06.314537 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.315495 kubelet[2468]: W0213 15:25:06.315443 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.315495 kubelet[2468]: E0213 15:25:06.315488 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.315952 kubelet[2468]: I0213 15:25:06.315925 2468 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:25:06.316009 kubelet[2468]: W0213 15:25:06.315984 2468 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:25:06.316569 kubelet[2468]: I0213 15:25:06.316548 2468 server.go:1256] "Started kubelet" Feb 13 15:25:06.316804 kubelet[2468]: I0213 15:25:06.316642 2468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:25:06.317790 kubelet[2468]: I0213 15:25:06.316972 2468 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:25:06.317790 kubelet[2468]: I0213 15:25:06.317029 2468 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:25:06.317790 kubelet[2468]: I0213 15:25:06.317597 2468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:25:06.318021 kubelet[2468]: I0213 15:25:06.317991 2468 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:25:06.326265 kubelet[2468]: E0213 15:25:06.326231 2468 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cdeebae38faf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:25:06.316529583 +0000 UTC m=+0.295453582,LastTimestamp:2025-02-13 15:25:06.316529583 
+0000 UTC m=+0.295453582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:25:06.326265 kubelet[2468]: I0213 15:25:06.326241 2468 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:25:06.327297 kubelet[2468]: I0213 15:25:06.326406 2468 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:25:06.327297 kubelet[2468]: E0213 15:25:06.326482 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Feb 13 15:25:06.327297 kubelet[2468]: I0213 15:25:06.326500 2468 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:25:06.327297 kubelet[2468]: W0213 15:25:06.327015 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.327297 kubelet[2468]: E0213 15:25:06.327081 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.328925 kubelet[2468]: I0213 15:25:06.328899 2468 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:25:06.329098 kubelet[2468]: I0213 15:25:06.329067 2468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:25:06.329771 kubelet[2468]: E0213 15:25:06.329743 2468 
kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:25:06.330611 kubelet[2468]: I0213 15:25:06.330582 2468 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:25:06.342735 kubelet[2468]: I0213 15:25:06.342685 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:25:06.344277 kubelet[2468]: I0213 15:25:06.344243 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:25:06.344323 kubelet[2468]: I0213 15:25:06.344286 2468 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:25:06.344350 kubelet[2468]: I0213 15:25:06.344339 2468 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:25:06.344441 kubelet[2468]: E0213 15:25:06.344424 2468 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:25:06.349040 kubelet[2468]: W0213 15:25:06.348604 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.349040 kubelet[2468]: E0213 15:25:06.348643 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:06.356331 kubelet[2468]: I0213 15:25:06.356168 2468 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:25:06.356331 kubelet[2468]: I0213 15:25:06.356193 2468 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:25:06.356331 kubelet[2468]: I0213 
15:25:06.356211 2468 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:25:06.427693 kubelet[2468]: I0213 15:25:06.427651 2468 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:25:06.428200 kubelet[2468]: E0213 15:25:06.428169 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:25:06.445557 kubelet[2468]: E0213 15:25:06.445437 2468 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:25:06.527111 kubelet[2468]: E0213 15:25:06.527072 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Feb 13 15:25:06.629784 kubelet[2468]: I0213 15:25:06.629659 2468 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:25:06.630281 kubelet[2468]: E0213 15:25:06.630247 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:25:06.646524 kubelet[2468]: E0213 15:25:06.646472 2468 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:25:06.867125 kubelet[2468]: I0213 15:25:06.867017 2468 policy_none.go:49] "None policy: Start" Feb 13 15:25:06.868099 kubelet[2468]: I0213 15:25:06.868067 2468 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:25:06.868099 kubelet[2468]: I0213 15:25:06.868100 2468 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:25:06.877137 kubelet[2468]: I0213 15:25:06.877087 2468 manager.go:479] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:25:06.877576 kubelet[2468]: I0213 15:25:06.877539 2468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:25:06.879092 kubelet[2468]: E0213 15:25:06.879064 2468 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:25:06.927969 kubelet[2468]: E0213 15:25:06.927909 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Feb 13 15:25:07.032046 kubelet[2468]: I0213 15:25:07.031986 2468 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:25:07.032424 kubelet[2468]: E0213 15:25:07.032386 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:25:07.047657 kubelet[2468]: I0213 15:25:07.047591 2468 topology_manager.go:215] "Topology Admit Handler" podUID="ab6d2185da3d20690ebc2f04827edc4a" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:25:07.048796 kubelet[2468]: I0213 15:25:07.048746 2468 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:25:07.049466 kubelet[2468]: I0213 15:25:07.049440 2468 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:25:07.130503 kubelet[2468]: I0213 15:25:07.130449 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ab6d2185da3d20690ebc2f04827edc4a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab6d2185da3d20690ebc2f04827edc4a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:07.130503 kubelet[2468]: I0213 15:25:07.130524 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab6d2185da3d20690ebc2f04827edc4a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab6d2185da3d20690ebc2f04827edc4a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:07.131003 kubelet[2468]: I0213 15:25:07.130611 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:07.131003 kubelet[2468]: I0213 15:25:07.130675 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:07.131003 kubelet[2468]: I0213 15:25:07.130746 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:07.131003 kubelet[2468]: I0213 15:25:07.130776 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ab6d2185da3d20690ebc2f04827edc4a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab6d2185da3d20690ebc2f04827edc4a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:07.131003 kubelet[2468]: I0213 15:25:07.130807 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:07.131166 kubelet[2468]: I0213 15:25:07.130833 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:07.131166 kubelet[2468]: I0213 15:25:07.130858 2468 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:25:07.192506 kubelet[2468]: E0213 15:25:07.192353 2468 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cdeebae38faf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:25:06.316529583 +0000 UTC m=+0.295453582,LastTimestamp:2025-02-13 15:25:06.316529583 +0000 UTC m=+0.295453582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:25:07.354805 kubelet[2468]: E0213 15:25:07.354763 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:07.355368 containerd[1605]: time="2025-02-13T15:25:07.355323582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab6d2185da3d20690ebc2f04827edc4a,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:07.356602 kubelet[2468]: E0213 15:25:07.356575 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:07.356957 kubelet[2468]: E0213 15:25:07.356933 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:07.357154 containerd[1605]: time="2025-02-13T15:25:07.357103741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:07.357285 containerd[1605]: time="2025-02-13T15:25:07.357256187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:07.402713 kubelet[2468]: W0213 15:25:07.402664 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.402713 kubelet[2468]: E0213 15:25:07.402709 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.418117 kubelet[2468]: W0213 15:25:07.418094 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.418117 kubelet[2468]: E0213 15:25:07.418119 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.580672 kubelet[2468]: W0213 15:25:07.580492 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.580672 kubelet[2468]: E0213 15:25:07.580581 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.729051 kubelet[2468]: E0213 15:25:07.729002 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.48:6443: connect: connection refused" interval="1.6s" Feb 13 15:25:07.816793 kubelet[2468]: W0213 15:25:07.816732 2468 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.816793 kubelet[2468]: E0213 15:25:07.816793 2468 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:07.834046 kubelet[2468]: I0213 15:25:07.833989 2468 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:25:07.834354 kubelet[2468]: E0213 15:25:07.834321 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Feb 13 15:25:08.075821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685952757.mount: Deactivated successfully. 
Feb 13 15:25:08.080376 containerd[1605]: time="2025-02-13T15:25:08.080317815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:08.084195 containerd[1605]: time="2025-02-13T15:25:08.084076544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:25:08.085277 containerd[1605]: time="2025-02-13T15:25:08.085229817Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:08.087235 containerd[1605]: time="2025-02-13T15:25:08.087197247Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:08.088043 containerd[1605]: time="2025-02-13T15:25:08.087991156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:25:08.088973 containerd[1605]: time="2025-02-13T15:25:08.088909859Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:08.089857 containerd[1605]: time="2025-02-13T15:25:08.089775091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:25:08.090889 containerd[1605]: time="2025-02-13T15:25:08.090861408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:08.091592 
containerd[1605]: time="2025-02-13T15:25:08.091567623Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.139995ms" Feb 13 15:25:08.095096 containerd[1605]: time="2025-02-13T15:25:08.095044263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.730678ms" Feb 13 15:25:08.099320 containerd[1605]: time="2025-02-13T15:25:08.099237847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 742.027707ms" Feb 13 15:25:08.284592 containerd[1605]: time="2025-02-13T15:25:08.284473620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:08.284592 containerd[1605]: time="2025-02-13T15:25:08.284529064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:08.284592 containerd[1605]: time="2025-02-13T15:25:08.284564070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:08.285502 containerd[1605]: time="2025-02-13T15:25:08.284676290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:08.291902 containerd[1605]: time="2025-02-13T15:25:08.291760045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:08.291902 containerd[1605]: time="2025-02-13T15:25:08.291836689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:08.291902 containerd[1605]: time="2025-02-13T15:25:08.291857518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:08.292858 containerd[1605]: time="2025-02-13T15:25:08.292480266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:08.292858 containerd[1605]: time="2025-02-13T15:25:08.292615539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:08.292858 containerd[1605]: time="2025-02-13T15:25:08.292641568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:08.292858 containerd[1605]: time="2025-02-13T15:25:08.292772293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:08.294641 containerd[1605]: time="2025-02-13T15:25:08.294550458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:08.401563 containerd[1605]: time="2025-02-13T15:25:08.400985191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"00f6214d2d073c148c275318b45273d3e136160a182e1ce7bd8218afe7da4375\"" Feb 13 15:25:08.403063 kubelet[2468]: E0213 15:25:08.402917 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:08.409287 containerd[1605]: time="2025-02-13T15:25:08.409245612Z" level=info msg="CreateContainer within sandbox \"00f6214d2d073c148c275318b45273d3e136160a182e1ce7bd8218afe7da4375\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:25:08.409411 containerd[1605]: time="2025-02-13T15:25:08.409297039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ab6d2185da3d20690ebc2f04827edc4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e40ac0eea06009b28f529fb1fc7fdba0c780da1b51ae45e9fdd9823720e2f338\"" Feb 13 15:25:08.410079 kubelet[2468]: E0213 15:25:08.410015 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:08.411083 containerd[1605]: time="2025-02-13T15:25:08.411057631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb5082f43857b5068b3cf0c1e80b92e8ebfd493a62e8a333519ca826fed6d24\"" Feb 13 15:25:08.412509 kubelet[2468]: E0213 15:25:08.412480 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 13 15:25:08.412945 containerd[1605]: time="2025-02-13T15:25:08.412923450Z" level=info msg="CreateContainer within sandbox \"e40ac0eea06009b28f529fb1fc7fdba0c780da1b51ae45e9fdd9823720e2f338\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:25:08.414264 containerd[1605]: time="2025-02-13T15:25:08.414236342Z" level=info msg="CreateContainer within sandbox \"6eb5082f43857b5068b3cf0c1e80b92e8ebfd493a62e8a333519ca826fed6d24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:25:08.431548 kubelet[2468]: E0213 15:25:08.431491 2468 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.48:6443: connect: connection refused Feb 13 15:25:08.439199 containerd[1605]: time="2025-02-13T15:25:08.439119877Z" level=info msg="CreateContainer within sandbox \"e40ac0eea06009b28f529fb1fc7fdba0c780da1b51ae45e9fdd9823720e2f338\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c17bf3abdc23675c15b01ff21871dba5c7a9c157f75f73fdf04ed959218e64ed\"" Feb 13 15:25:08.440136 containerd[1605]: time="2025-02-13T15:25:08.440107239Z" level=info msg="StartContainer for \"c17bf3abdc23675c15b01ff21871dba5c7a9c157f75f73fdf04ed959218e64ed\"" Feb 13 15:25:08.445242 containerd[1605]: time="2025-02-13T15:25:08.445179251Z" level=info msg="CreateContainer within sandbox \"00f6214d2d073c148c275318b45273d3e136160a182e1ce7bd8218afe7da4375\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"246309cae7be180966bab53b4f5312fdca8d7d7a740d3f66c8feff5c6a2c71c4\"" Feb 13 15:25:08.446850 containerd[1605]: time="2025-02-13T15:25:08.445785818Z" level=info msg="StartContainer for \"246309cae7be180966bab53b4f5312fdca8d7d7a740d3f66c8feff5c6a2c71c4\"" Feb 13 
15:25:08.447740 containerd[1605]: time="2025-02-13T15:25:08.447714295Z" level=info msg="CreateContainer within sandbox \"6eb5082f43857b5068b3cf0c1e80b92e8ebfd493a62e8a333519ca826fed6d24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bc54f5356fb5cd7be0b0624bec4cb72cf0c79079397601ac716c081050639e66\"" Feb 13 15:25:08.448692 containerd[1605]: time="2025-02-13T15:25:08.448110989Z" level=info msg="StartContainer for \"bc54f5356fb5cd7be0b0624bec4cb72cf0c79079397601ac716c081050639e66\"" Feb 13 15:25:08.529312 containerd[1605]: time="2025-02-13T15:25:08.529237250Z" level=info msg="StartContainer for \"c17bf3abdc23675c15b01ff21871dba5c7a9c157f75f73fdf04ed959218e64ed\" returns successfully" Feb 13 15:25:08.538344 containerd[1605]: time="2025-02-13T15:25:08.538292902Z" level=info msg="StartContainer for \"bc54f5356fb5cd7be0b0624bec4cb72cf0c79079397601ac716c081050639e66\" returns successfully" Feb 13 15:25:08.547038 containerd[1605]: time="2025-02-13T15:25:08.546997526Z" level=info msg="StartContainer for \"246309cae7be180966bab53b4f5312fdca8d7d7a740d3f66c8feff5c6a2c71c4\" returns successfully" Feb 13 15:25:09.394280 kubelet[2468]: E0213 15:25:09.364559 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:09.394280 kubelet[2468]: E0213 15:25:09.365893 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:09.394280 kubelet[2468]: E0213 15:25:09.368545 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:09.436300 kubelet[2468]: I0213 15:25:09.436256 2468 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 
15:25:09.795773 kubelet[2468]: E0213 15:25:09.795725 2468 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:25:09.846803 kubelet[2468]: I0213 15:25:09.846750 2468 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:25:09.858124 kubelet[2468]: E0213 15:25:09.858080 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:09.958461 kubelet[2468]: E0213 15:25:09.958397 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.058771 kubelet[2468]: E0213 15:25:10.058647 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.159252 kubelet[2468]: E0213 15:25:10.159215 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.259749 kubelet[2468]: E0213 15:25:10.259695 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.360059 kubelet[2468]: E0213 15:25:10.359935 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.369518 kubelet[2468]: E0213 15:25:10.369498 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:10.460497 kubelet[2468]: E0213 15:25:10.460446 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.561106 kubelet[2468]: E0213 15:25:10.561039 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:10.661697 kubelet[2468]: 
E0213 15:25:10.661638 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:25:11.316612 kubelet[2468]: I0213 15:25:11.316558 2468 apiserver.go:52] "Watching apiserver" Feb 13 15:25:11.326960 kubelet[2468]: I0213 15:25:11.326916 2468 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:25:11.806310 kubelet[2468]: E0213 15:25:11.806270 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:12.371332 kubelet[2468]: E0213 15:25:12.371303 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:12.884648 kubelet[2468]: E0213 15:25:12.884615 2468 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:12.887640 systemd[1]: Reloading requested from client PID 2751 ('systemctl') (unit session-7.scope)... Feb 13 15:25:12.887660 systemd[1]: Reloading... Feb 13 15:25:12.972183 zram_generator::config[2794]: No configuration found. Feb 13 15:25:13.087707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:25:13.168775 systemd[1]: Reloading finished in 280 ms. Feb 13 15:25:13.202625 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:13.228732 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:25:13.229196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:25:13.239379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:13.381514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:13.386262 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:25:13.438707 kubelet[2845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:25:13.438707 kubelet[2845]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:25:13.438707 kubelet[2845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:25:13.439089 kubelet[2845]: I0213 15:25:13.438713 2845 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:25:13.443530 kubelet[2845]: I0213 15:25:13.443508 2845 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:25:13.444380 kubelet[2845]: I0213 15:25:13.443597 2845 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:25:13.465287 kubelet[2845]: I0213 15:25:13.465244 2845 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:25:13.466732 kubelet[2845]: I0213 15:25:13.466706 2845 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 15:25:13.469533 kubelet[2845]: I0213 15:25:13.469335 2845 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:25:13.477546 kubelet[2845]: I0213 15:25:13.477511 2845 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:25:13.478080 kubelet[2845]: I0213 15:25:13.478056 2845 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:25:13.478263 kubelet[2845]: I0213 15:25:13.478238 2845 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Feb 13 15:25:13.478348 kubelet[2845]: I0213 15:25:13.478267 2845 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:25:13.478348 kubelet[2845]: I0213 15:25:13.478278 2845 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:25:13.478348 kubelet[2845]: I0213 15:25:13.478317 2845 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:25:13.478427 kubelet[2845]: I0213 15:25:13.478409 2845 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:25:13.478453 kubelet[2845]: I0213 15:25:13.478440 2845 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:25:13.478484 kubelet[2845]: I0213 15:25:13.478474 2845 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:25:13.478505 kubelet[2845]: I0213 15:25:13.478492 2845 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:25:13.479419 kubelet[2845]: I0213 15:25:13.479389 2845 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:25:13.480871 kubelet[2845]: I0213 15:25:13.479574 2845 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:25:13.480871 kubelet[2845]: I0213 15:25:13.479927 2845 server.go:1256] "Started kubelet" Feb 13 15:25:13.480871 kubelet[2845]: I0213 15:25:13.480134 2845 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:25:13.481477 kubelet[2845]: I0213 15:25:13.481450 2845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:25:13.484663 kubelet[2845]: I0213 15:25:13.484632 2845 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:25:13.486603 kubelet[2845]: I0213 15:25:13.485722 2845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:25:13.486603 kubelet[2845]: I0213 15:25:13.485909 2845 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:25:13.486603 kubelet[2845]: I0213 15:25:13.486009 2845 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:25:13.486603 kubelet[2845]: I0213 15:25:13.486099 2845 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:25:13.486603 kubelet[2845]: I0213 15:25:13.486280 2845 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:25:13.491986 kubelet[2845]: I0213 15:25:13.491963 2845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:25:13.493534 kubelet[2845]: I0213 15:25:13.493512 2845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:25:13.493589 kubelet[2845]: I0213 15:25:13.493549 2845 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:25:13.493589 kubelet[2845]: I0213 15:25:13.493581 2845 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:25:13.493653 kubelet[2845]: E0213 15:25:13.493640 2845 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:25:13.495049 kubelet[2845]: I0213 15:25:13.495019 2845 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:25:13.495526 kubelet[2845]: I0213 15:25:13.495170 2845 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:25:13.502201 kubelet[2845]: E0213 15:25:13.502173 2845 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:25:13.503466 kubelet[2845]: I0213 15:25:13.503449 2845 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:25:13.551357 kubelet[2845]: I0213 15:25:13.551023 2845 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:25:13.551357 kubelet[2845]: I0213 15:25:13.551048 2845 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:25:13.551357 kubelet[2845]: I0213 15:25:13.551070 2845 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:25:13.552737 kubelet[2845]: I0213 15:25:13.551568 2845 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:25:13.552737 kubelet[2845]: I0213 15:25:13.551594 2845 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:25:13.552737 kubelet[2845]: I0213 15:25:13.551602 2845 policy_none.go:49] "None policy: Start" Feb 13 15:25:13.552737 kubelet[2845]: I0213 15:25:13.552289 2845 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:25:13.552737 kubelet[2845]: I0213 15:25:13.552316 2845 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:25:13.552737 kubelet[2845]: I0213 15:25:13.552583 2845 state_mem.go:75] "Updated machine memory state" Feb 13 15:25:13.554049 kubelet[2845]: I0213 15:25:13.554021 2845 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:25:13.554989 kubelet[2845]: I0213 15:25:13.554971 2845 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:25:13.591509 kubelet[2845]: I0213 15:25:13.591469 2845 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:25:13.594704 kubelet[2845]: I0213 15:25:13.594650 2845 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 
15:25:13.594835 kubelet[2845]: I0213 15:25:13.594758 2845 topology_manager.go:215] "Topology Admit Handler" podUID="ab6d2185da3d20690ebc2f04827edc4a" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:25:13.594835 kubelet[2845]: I0213 15:25:13.594805 2845 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:25:13.674855 kubelet[2845]: E0213 15:25:13.674804 2845 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:25:13.675373 kubelet[2845]: E0213 15:25:13.675334 2845 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:13.676350 kubelet[2845]: I0213 15:25:13.676308 2845 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:25:13.676430 kubelet[2845]: I0213 15:25:13.676395 2845 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:25:13.787410 kubelet[2845]: I0213 15:25:13.787107 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:13.787410 kubelet[2845]: I0213 15:25:13.787170 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab6d2185da3d20690ebc2f04827edc4a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ab6d2185da3d20690ebc2f04827edc4a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:13.787410 
kubelet[2845]: I0213 15:25:13.787202 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:13.787410 kubelet[2845]: I0213 15:25:13.787228 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab6d2185da3d20690ebc2f04827edc4a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab6d2185da3d20690ebc2f04827edc4a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:13.787410 kubelet[2845]: I0213 15:25:13.787252 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:13.787696 kubelet[2845]: I0213 15:25:13.787392 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:25:13.787696 kubelet[2845]: I0213 15:25:13.787450 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 
13 15:25:13.787696 kubelet[2845]: I0213 15:25:13.787488 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:25:13.787696 kubelet[2845]: I0213 15:25:13.787511 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab6d2185da3d20690ebc2f04827edc4a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ab6d2185da3d20690ebc2f04827edc4a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:13.976573 kubelet[2845]: E0213 15:25:13.976262 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:13.976573 kubelet[2845]: E0213 15:25:13.976428 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:13.976573 kubelet[2845]: E0213 15:25:13.976558 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:14.479210 kubelet[2845]: I0213 15:25:14.479164 2845 apiserver.go:52] "Watching apiserver" Feb 13 15:25:14.487272 kubelet[2845]: I0213 15:25:14.487232 2845 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:25:14.516072 kubelet[2845]: E0213 15:25:14.515643 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:14.536109 
kubelet[2845]: E0213 15:25:14.536046 2845 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:25:14.536665 kubelet[2845]: E0213 15:25:14.536452 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:14.638645 kubelet[2845]: E0213 15:25:14.638595 2845 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:25:14.639114 kubelet[2845]: E0213 15:25:14.639091 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:14.658762 kubelet[2845]: I0213 15:25:14.658703 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.658657509 podStartE2EDuration="1.658657509s" podCreationTimestamp="2025-02-13 15:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:25:14.65811797 +0000 UTC m=+1.267482162" watchObservedRunningTime="2025-02-13 15:25:14.658657509 +0000 UTC m=+1.268021691" Feb 13 15:25:14.862006 kubelet[2845]: I0213 15:25:14.861363 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.861324063 podStartE2EDuration="2.861324063s" podCreationTimestamp="2025-02-13 15:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:25:14.730014982 +0000 UTC m=+1.339379174" watchObservedRunningTime="2025-02-13 15:25:14.861324063 +0000 UTC 
m=+1.470688245" Feb 13 15:25:14.862006 kubelet[2845]: I0213 15:25:14.861508 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.861493816 podStartE2EDuration="3.861493816s" podCreationTimestamp="2025-02-13 15:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:25:14.861257645 +0000 UTC m=+1.470621827" watchObservedRunningTime="2025-02-13 15:25:14.861493816 +0000 UTC m=+1.470857998" Feb 13 15:25:15.517459 kubelet[2845]: E0213 15:25:15.517178 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:15.518948 kubelet[2845]: E0213 15:25:15.517523 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:17.880341 kubelet[2845]: E0213 15:25:17.880298 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:18.506830 sudo[1812]: pam_unix(sudo:session): session closed for user root Feb 13 15:25:18.508426 sshd[1811]: Connection closed by 10.0.0.1 port 53326 Feb 13 15:25:18.509906 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:18.514779 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:53326.service: Deactivated successfully. Feb 13 15:25:18.517161 systemd-logind[1588]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:25:18.517270 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:25:18.518418 systemd-logind[1588]: Removed session 7. 
Feb 13 15:25:20.525109 kubelet[2845]: E0213 15:25:20.525073 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:21.228892 update_engine[1595]: I20250213 15:25:21.228783 1595 update_attempter.cc:509] Updating boot flags... Feb 13 15:25:21.269250 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2938) Feb 13 15:25:21.307234 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2936) Feb 13 15:25:21.336340 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2936) Feb 13 15:25:21.524369 kubelet[2845]: E0213 15:25:21.524233 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:22.791174 kubelet[2845]: E0213 15:25:22.791119 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:23.527435 kubelet[2845]: E0213 15:25:23.527393 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:27.345367 kubelet[2845]: I0213 15:25:27.345324 2845 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:25:27.345895 containerd[1605]: time="2025-02-13T15:25:27.345796921Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:25:27.346201 kubelet[2845]: I0213 15:25:27.346009 2845 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:25:27.885786 kubelet[2845]: E0213 15:25:27.885732 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:28.260060 kubelet[2845]: I0213 15:25:28.259931 2845 topology_manager.go:215] "Topology Admit Handler" podUID="cced0d9c-2aa5-4779-98fd-d02c75da8d83" podNamespace="kube-system" podName="kube-proxy-tdgfr" Feb 13 15:25:28.273772 kubelet[2845]: I0213 15:25:28.273720 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cced0d9c-2aa5-4779-98fd-d02c75da8d83-lib-modules\") pod \"kube-proxy-tdgfr\" (UID: \"cced0d9c-2aa5-4779-98fd-d02c75da8d83\") " pod="kube-system/kube-proxy-tdgfr" Feb 13 15:25:28.273772 kubelet[2845]: I0213 15:25:28.273767 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cced0d9c-2aa5-4779-98fd-d02c75da8d83-kube-proxy\") pod \"kube-proxy-tdgfr\" (UID: \"cced0d9c-2aa5-4779-98fd-d02c75da8d83\") " pod="kube-system/kube-proxy-tdgfr" Feb 13 15:25:28.273994 kubelet[2845]: I0213 15:25:28.273796 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cced0d9c-2aa5-4779-98fd-d02c75da8d83-xtables-lock\") pod \"kube-proxy-tdgfr\" (UID: \"cced0d9c-2aa5-4779-98fd-d02c75da8d83\") " pod="kube-system/kube-proxy-tdgfr" Feb 13 15:25:28.273994 kubelet[2845]: I0213 15:25:28.273824 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbdh8\" (UniqueName: 
\"kubernetes.io/projected/cced0d9c-2aa5-4779-98fd-d02c75da8d83-kube-api-access-dbdh8\") pod \"kube-proxy-tdgfr\" (UID: \"cced0d9c-2aa5-4779-98fd-d02c75da8d83\") " pod="kube-system/kube-proxy-tdgfr" Feb 13 15:25:28.540264 kubelet[2845]: I0213 15:25:28.539571 2845 topology_manager.go:215] "Topology Admit Handler" podUID="88dd5f3b-2104-4c70-852d-0983739cb844" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-c84x8" Feb 13 15:25:28.577567 kubelet[2845]: E0213 15:25:28.577499 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:28.584413 containerd[1605]: time="2025-02-13T15:25:28.580697608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdgfr,Uid:cced0d9c-2aa5-4779-98fd-d02c75da8d83,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:28.684980 kubelet[2845]: I0213 15:25:28.684845 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/88dd5f3b-2104-4c70-852d-0983739cb844-var-lib-calico\") pod \"tigera-operator-c7ccbd65-c84x8\" (UID: \"88dd5f3b-2104-4c70-852d-0983739cb844\") " pod="tigera-operator/tigera-operator-c7ccbd65-c84x8" Feb 13 15:25:28.684980 kubelet[2845]: I0213 15:25:28.684918 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88sf4\" (UniqueName: \"kubernetes.io/projected/88dd5f3b-2104-4c70-852d-0983739cb844-kube-api-access-88sf4\") pod \"tigera-operator-c7ccbd65-c84x8\" (UID: \"88dd5f3b-2104-4c70-852d-0983739cb844\") " pod="tigera-operator/tigera-operator-c7ccbd65-c84x8" Feb 13 15:25:28.687578 containerd[1605]: time="2025-02-13T15:25:28.685930658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:28.687578 containerd[1605]: time="2025-02-13T15:25:28.686005840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:28.687578 containerd[1605]: time="2025-02-13T15:25:28.686028503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:28.687578 containerd[1605]: time="2025-02-13T15:25:28.686187302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:28.786523 containerd[1605]: time="2025-02-13T15:25:28.783720495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tdgfr,Uid:cced0d9c-2aa5-4779-98fd-d02c75da8d83,Namespace:kube-system,Attempt:0,} returns sandbox id \"7049a16ccd45a164a65c505f47a6e9058f6075c36c4ff6e56b635fa0fed8edcf\"" Feb 13 15:25:28.786667 kubelet[2845]: E0213 15:25:28.784753 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:28.801702 containerd[1605]: time="2025-02-13T15:25:28.801416406Z" level=info msg="CreateContainer within sandbox \"7049a16ccd45a164a65c505f47a6e9058f6075c36c4ff6e56b635fa0fed8edcf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:25:28.853683 containerd[1605]: time="2025-02-13T15:25:28.853630120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-c84x8,Uid:88dd5f3b-2104-4c70-852d-0983739cb844,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:25:28.895875 containerd[1605]: time="2025-02-13T15:25:28.895720402Z" level=info msg="CreateContainer within sandbox \"7049a16ccd45a164a65c505f47a6e9058f6075c36c4ff6e56b635fa0fed8edcf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns 
container id \"b557ce38bf1b58561f77eaafcbda3bf99516dc41ecece75ee7d1083b35288044\"" Feb 13 15:25:28.903122 containerd[1605]: time="2025-02-13T15:25:28.897462401Z" level=info msg="StartContainer for \"b557ce38bf1b58561f77eaafcbda3bf99516dc41ecece75ee7d1083b35288044\"" Feb 13 15:25:28.985342 containerd[1605]: time="2025-02-13T15:25:28.984958932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:28.985342 containerd[1605]: time="2025-02-13T15:25:28.985052038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:28.985342 containerd[1605]: time="2025-02-13T15:25:28.985072807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:28.985342 containerd[1605]: time="2025-02-13T15:25:28.985225085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:29.139855 containerd[1605]: time="2025-02-13T15:25:29.139725645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-c84x8,Uid:88dd5f3b-2104-4c70-852d-0983739cb844,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"61b1d6581669873fcfac7e3d54ca9712c42ded6845dd057b689122533a41b838\"" Feb 13 15:25:29.149988 containerd[1605]: time="2025-02-13T15:25:29.149733633Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:25:29.158428 containerd[1605]: time="2025-02-13T15:25:29.157307869Z" level=info msg="StartContainer for \"b557ce38bf1b58561f77eaafcbda3bf99516dc41ecece75ee7d1083b35288044\" returns successfully" Feb 13 15:25:29.553319 kubelet[2845]: E0213 15:25:29.553264 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:29.589327 kubelet[2845]: I0213 15:25:29.588953 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tdgfr" podStartSLOduration=1.588902938 podStartE2EDuration="1.588902938s" podCreationTimestamp="2025-02-13 15:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:25:29.587672495 +0000 UTC m=+16.197036677" watchObservedRunningTime="2025-02-13 15:25:29.588902938 +0000 UTC m=+16.198267130" Feb 13 15:25:33.302801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746641005.mount: Deactivated successfully. 
Feb 13 15:25:34.144400 containerd[1605]: time="2025-02-13T15:25:34.144318422Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:34.158578 containerd[1605]: time="2025-02-13T15:25:34.158519976Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 15:25:34.176737 containerd[1605]: time="2025-02-13T15:25:34.176680932Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:34.229215 containerd[1605]: time="2025-02-13T15:25:34.229159752Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:34.229942 containerd[1605]: time="2025-02-13T15:25:34.229902302Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.080125847s" Feb 13 15:25:34.229980 containerd[1605]: time="2025-02-13T15:25:34.229944601Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 15:25:34.231506 containerd[1605]: time="2025-02-13T15:25:34.231470225Z" level=info msg="CreateContainer within sandbox \"61b1d6581669873fcfac7e3d54ca9712c42ded6845dd057b689122533a41b838\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:25:34.586848 containerd[1605]: time="2025-02-13T15:25:34.586797734Z" level=info msg="CreateContainer within sandbox 
\"61b1d6581669873fcfac7e3d54ca9712c42ded6845dd057b689122533a41b838\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"99389afbbd0e5e2f22a8c7896761400553bf575a4c59fc21a906f3e8349ede63\"" Feb 13 15:25:34.587247 containerd[1605]: time="2025-02-13T15:25:34.587222855Z" level=info msg="StartContainer for \"99389afbbd0e5e2f22a8c7896761400553bf575a4c59fc21a906f3e8349ede63\"" Feb 13 15:25:34.797723 containerd[1605]: time="2025-02-13T15:25:34.797650362Z" level=info msg="StartContainer for \"99389afbbd0e5e2f22a8c7896761400553bf575a4c59fc21a906f3e8349ede63\" returns successfully" Feb 13 15:25:35.621631 kubelet[2845]: I0213 15:25:35.621593 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-c84x8" podStartSLOduration=2.540376754 podStartE2EDuration="7.621552902s" podCreationTimestamp="2025-02-13 15:25:28 +0000 UTC" firstStartedPulling="2025-02-13 15:25:29.149074119 +0000 UTC m=+15.758438301" lastFinishedPulling="2025-02-13 15:25:34.230250267 +0000 UTC m=+20.839614449" observedRunningTime="2025-02-13 15:25:35.621357464 +0000 UTC m=+22.230721646" watchObservedRunningTime="2025-02-13 15:25:35.621552902 +0000 UTC m=+22.230917084" Feb 13 15:25:37.482021 kubelet[2845]: I0213 15:25:37.481968 2845 topology_manager.go:215] "Topology Admit Handler" podUID="bbca4e58-87c0-4500-aff5-ebbe5e7c7aae" podNamespace="calico-system" podName="calico-typha-587bd87bfd-9njqw" Feb 13 15:25:37.537864 kubelet[2845]: I0213 15:25:37.537123 2845 topology_manager.go:215] "Topology Admit Handler" podUID="b176a865-d239-4152-808d-eedd5ddf6f4c" podNamespace="calico-system" podName="calico-node-mp9n6" Feb 13 15:25:37.561165 kubelet[2845]: I0213 15:25:37.560075 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbca4e58-87c0-4500-aff5-ebbe5e7c7aae-tigera-ca-bundle\") pod \"calico-typha-587bd87bfd-9njqw\" (UID: 
\"bbca4e58-87c0-4500-aff5-ebbe5e7c7aae\") " pod="calico-system/calico-typha-587bd87bfd-9njqw" Feb 13 15:25:37.561165 kubelet[2845]: I0213 15:25:37.560129 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-policysync\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561165 kubelet[2845]: I0213 15:25:37.560174 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b176a865-d239-4152-808d-eedd5ddf6f4c-tigera-ca-bundle\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561165 kubelet[2845]: I0213 15:25:37.560257 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-var-run-calico\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561165 kubelet[2845]: I0213 15:25:37.560321 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzsw\" (UniqueName: \"kubernetes.io/projected/b176a865-d239-4152-808d-eedd5ddf6f4c-kube-api-access-4fzsw\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561465 kubelet[2845]: I0213 15:25:37.560389 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-flexvol-driver-host\") pod \"calico-node-mp9n6\" (UID: 
\"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561465 kubelet[2845]: I0213 15:25:37.560416 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b176a865-d239-4152-808d-eedd5ddf6f4c-node-certs\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561465 kubelet[2845]: I0213 15:25:37.560442 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj978\" (UniqueName: \"kubernetes.io/projected/bbca4e58-87c0-4500-aff5-ebbe5e7c7aae-kube-api-access-bj978\") pod \"calico-typha-587bd87bfd-9njqw\" (UID: \"bbca4e58-87c0-4500-aff5-ebbe5e7c7aae\") " pod="calico-system/calico-typha-587bd87bfd-9njqw" Feb 13 15:25:37.561465 kubelet[2845]: I0213 15:25:37.560466 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-cni-bin-dir\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561465 kubelet[2845]: I0213 15:25:37.560489 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-xtables-lock\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561609 kubelet[2845]: I0213 15:25:37.560513 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-cni-net-dir\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " 
pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561609 kubelet[2845]: I0213 15:25:37.560536 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bbca4e58-87c0-4500-aff5-ebbe5e7c7aae-typha-certs\") pod \"calico-typha-587bd87bfd-9njqw\" (UID: \"bbca4e58-87c0-4500-aff5-ebbe5e7c7aae\") " pod="calico-system/calico-typha-587bd87bfd-9njqw" Feb 13 15:25:37.561609 kubelet[2845]: I0213 15:25:37.560560 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-lib-modules\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561609 kubelet[2845]: I0213 15:25:37.560582 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-var-lib-calico\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.561609 kubelet[2845]: I0213 15:25:37.560603 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b176a865-d239-4152-808d-eedd5ddf6f4c-cni-log-dir\") pod \"calico-node-mp9n6\" (UID: \"b176a865-d239-4152-808d-eedd5ddf6f4c\") " pod="calico-system/calico-node-mp9n6" Feb 13 15:25:37.663304 kubelet[2845]: E0213 15:25:37.663271 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.663304 kubelet[2845]: W0213 15:25:37.663294 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable 
file not found in $PATH, output: "" Feb 13 15:25:37.663304 kubelet[2845]: E0213 15:25:37.663313 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:37.667260 kubelet[2845]: E0213 15:25:37.667228 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.667260 kubelet[2845]: W0213 15:25:37.667254 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.667333 kubelet[2845]: E0213 15:25:37.667280 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:37.667493 kubelet[2845]: E0213 15:25:37.667482 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.667493 kubelet[2845]: W0213 15:25:37.667491 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.667549 kubelet[2845]: E0213 15:25:37.667501 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:37.762079 kubelet[2845]: E0213 15:25:37.761941 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.762079 kubelet[2845]: W0213 15:25:37.761965 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.762079 kubelet[2845]: E0213 15:25:37.761988 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:37.762461 kubelet[2845]: E0213 15:25:37.762424 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.762461 kubelet[2845]: W0213 15:25:37.762452 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.762645 kubelet[2845]: E0213 15:25:37.762487 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:37.864079 kubelet[2845]: E0213 15:25:37.864043 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.864079 kubelet[2845]: W0213 15:25:37.864070 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.864265 kubelet[2845]: E0213 15:25:37.864094 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:37.864404 kubelet[2845]: E0213 15:25:37.864380 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.864404 kubelet[2845]: W0213 15:25:37.864396 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.864453 kubelet[2845]: E0213 15:25:37.864410 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:37.914595 kubelet[2845]: E0213 15:25:37.914268 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.914595 kubelet[2845]: W0213 15:25:37.914290 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.914595 kubelet[2845]: E0213 15:25:37.914367 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:37.914787 kubelet[2845]: E0213 15:25:37.914671 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:37.914787 kubelet[2845]: W0213 15:25:37.914680 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:37.914787 kubelet[2845]: E0213 15:25:37.914720 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.047530 kubelet[2845]: I0213 15:25:38.047401 2845 topology_manager.go:215] "Topology Admit Handler" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" podNamespace="calico-system" podName="csi-node-driver-hctxf" Feb 13 15:25:38.047710 kubelet[2845]: E0213 15:25:38.047652 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:38.066195 kubelet[2845]: E0213 15:25:38.066154 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.066195 kubelet[2845]: W0213 15:25:38.066177 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.066195 kubelet[2845]: E0213 15:25:38.066199 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.066513 kubelet[2845]: E0213 15:25:38.066496 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.066513 kubelet[2845]: W0213 15:25:38.066509 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.066564 kubelet[2845]: E0213 15:25:38.066522 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.066739 kubelet[2845]: E0213 15:25:38.066726 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.066739 kubelet[2845]: W0213 15:25:38.066737 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.066792 kubelet[2845]: E0213 15:25:38.066749 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.066951 kubelet[2845]: E0213 15:25:38.066940 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.066976 kubelet[2845]: W0213 15:25:38.066951 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.066976 kubelet[2845]: E0213 15:25:38.066962 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.067174 kubelet[2845]: E0213 15:25:38.067161 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.067174 kubelet[2845]: W0213 15:25:38.067170 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.067253 kubelet[2845]: E0213 15:25:38.067179 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.067384 kubelet[2845]: E0213 15:25:38.067374 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.067384 kubelet[2845]: W0213 15:25:38.067382 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.067432 kubelet[2845]: E0213 15:25:38.067391 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.067557 kubelet[2845]: E0213 15:25:38.067541 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.067557 kubelet[2845]: W0213 15:25:38.067549 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.067597 kubelet[2845]: E0213 15:25:38.067558 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.067708 kubelet[2845]: E0213 15:25:38.067698 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.067708 kubelet[2845]: W0213 15:25:38.067708 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.067754 kubelet[2845]: E0213 15:25:38.067716 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.067895 kubelet[2845]: E0213 15:25:38.067885 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.067895 kubelet[2845]: W0213 15:25:38.067893 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.067941 kubelet[2845]: E0213 15:25:38.067902 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.068054 kubelet[2845]: E0213 15:25:38.068045 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.068054 kubelet[2845]: W0213 15:25:38.068052 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.068095 kubelet[2845]: E0213 15:25:38.068062 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.068221 kubelet[2845]: E0213 15:25:38.068211 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.068221 kubelet[2845]: W0213 15:25:38.068219 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.068291 kubelet[2845]: E0213 15:25:38.068236 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.068409 kubelet[2845]: E0213 15:25:38.068398 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.068409 kubelet[2845]: W0213 15:25:38.068407 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.068470 kubelet[2845]: E0213 15:25:38.068420 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.068634 kubelet[2845]: E0213 15:25:38.068621 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.068634 kubelet[2845]: W0213 15:25:38.068631 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.068696 kubelet[2845]: E0213 15:25:38.068642 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.068822 kubelet[2845]: E0213 15:25:38.068813 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.068822 kubelet[2845]: W0213 15:25:38.068821 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.068866 kubelet[2845]: E0213 15:25:38.068830 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.068987 kubelet[2845]: E0213 15:25:38.068978 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.068987 kubelet[2845]: W0213 15:25:38.068985 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.069032 kubelet[2845]: E0213 15:25:38.068995 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.069168 kubelet[2845]: E0213 15:25:38.069157 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.069168 kubelet[2845]: W0213 15:25:38.069165 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.069237 kubelet[2845]: E0213 15:25:38.069174 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.069383 kubelet[2845]: E0213 15:25:38.069374 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.069383 kubelet[2845]: W0213 15:25:38.069382 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.069431 kubelet[2845]: E0213 15:25:38.069391 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.069546 kubelet[2845]: E0213 15:25:38.069531 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.069546 kubelet[2845]: W0213 15:25:38.069539 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.069546 kubelet[2845]: E0213 15:25:38.069547 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.069701 kubelet[2845]: E0213 15:25:38.069691 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.069701 kubelet[2845]: W0213 15:25:38.069700 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.069752 kubelet[2845]: E0213 15:25:38.069710 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.069858 kubelet[2845]: E0213 15:25:38.069849 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.069858 kubelet[2845]: W0213 15:25:38.069857 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.069901 kubelet[2845]: E0213 15:25:38.069865 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.070093 kubelet[2845]: E0213 15:25:38.070083 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.070093 kubelet[2845]: W0213 15:25:38.070091 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.070172 kubelet[2845]: E0213 15:25:38.070100 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.070172 kubelet[2845]: I0213 15:25:38.070126 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4c1f8e5-b232-41b8-a095-6576678dbe57-registration-dir\") pod \"csi-node-driver-hctxf\" (UID: \"f4c1f8e5-b232-41b8-a095-6576678dbe57\") " pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:38.070330 kubelet[2845]: E0213 15:25:38.070318 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.070330 kubelet[2845]: W0213 15:25:38.070327 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.070397 kubelet[2845]: E0213 15:25:38.070342 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.070397 kubelet[2845]: I0213 15:25:38.070359 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f4c1f8e5-b232-41b8-a095-6576678dbe57-varrun\") pod \"csi-node-driver-hctxf\" (UID: \"f4c1f8e5-b232-41b8-a095-6576678dbe57\") " pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:38.070669 kubelet[2845]: E0213 15:25:38.070642 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.070712 kubelet[2845]: W0213 15:25:38.070667 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.070712 kubelet[2845]: E0213 15:25:38.070693 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.070897 kubelet[2845]: E0213 15:25:38.070871 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.070897 kubelet[2845]: W0213 15:25:38.070885 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.070897 kubelet[2845]: E0213 15:25:38.070904 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.071100 kubelet[2845]: E0213 15:25:38.071088 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.071100 kubelet[2845]: W0213 15:25:38.071097 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.071203 kubelet[2845]: E0213 15:25:38.071112 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.071203 kubelet[2845]: I0213 15:25:38.071131 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4c1f8e5-b232-41b8-a095-6576678dbe57-kubelet-dir\") pod \"csi-node-driver-hctxf\" (UID: \"f4c1f8e5-b232-41b8-a095-6576678dbe57\") " pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:38.071336 kubelet[2845]: E0213 15:25:38.071322 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.071336 kubelet[2845]: W0213 15:25:38.071332 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.071388 kubelet[2845]: E0213 15:25:38.071346 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.071388 kubelet[2845]: I0213 15:25:38.071362 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72cn\" (UniqueName: \"kubernetes.io/projected/f4c1f8e5-b232-41b8-a095-6576678dbe57-kube-api-access-w72cn\") pod \"csi-node-driver-hctxf\" (UID: \"f4c1f8e5-b232-41b8-a095-6576678dbe57\") " pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:38.071553 kubelet[2845]: E0213 15:25:38.071539 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.071553 kubelet[2845]: W0213 15:25:38.071550 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.071609 kubelet[2845]: E0213 15:25:38.071579 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.071609 kubelet[2845]: I0213 15:25:38.071607 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f4c1f8e5-b232-41b8-a095-6576678dbe57-socket-dir\") pod \"csi-node-driver-hctxf\" (UID: \"f4c1f8e5-b232-41b8-a095-6576678dbe57\") " pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:38.071719 kubelet[2845]: E0213 15:25:38.071706 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.071719 kubelet[2845]: W0213 15:25:38.071716 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.071777 kubelet[2845]: E0213 15:25:38.071743 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.071887 kubelet[2845]: E0213 15:25:38.071875 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.071887 kubelet[2845]: W0213 15:25:38.071884 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.071943 kubelet[2845]: E0213 15:25:38.071897 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.072092 kubelet[2845]: E0213 15:25:38.072073 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.072092 kubelet[2845]: W0213 15:25:38.072086 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.072189 kubelet[2845]: E0213 15:25:38.072104 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.072316 kubelet[2845]: E0213 15:25:38.072302 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.072316 kubelet[2845]: W0213 15:25:38.072314 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.072366 kubelet[2845]: E0213 15:25:38.072328 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.072534 kubelet[2845]: E0213 15:25:38.072523 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.072559 kubelet[2845]: W0213 15:25:38.072533 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.072559 kubelet[2845]: E0213 15:25:38.072542 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.072713 kubelet[2845]: E0213 15:25:38.072703 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.072713 kubelet[2845]: W0213 15:25:38.072712 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.072761 kubelet[2845]: E0213 15:25:38.072721 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.072891 kubelet[2845]: E0213 15:25:38.072881 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.072891 kubelet[2845]: W0213 15:25:38.072889 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.072934 kubelet[2845]: E0213 15:25:38.072898 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.073067 kubelet[2845]: E0213 15:25:38.073057 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.073067 kubelet[2845]: W0213 15:25:38.073065 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.073120 kubelet[2845]: E0213 15:25:38.073074 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.095371 kubelet[2845]: E0213 15:25:38.095334 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:38.095849 containerd[1605]: time="2025-02-13T15:25:38.095811036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-587bd87bfd-9njqw,Uid:bbca4e58-87c0-4500-aff5-ebbe5e7c7aae,Namespace:calico-system,Attempt:0,}" Feb 13 15:25:38.144611 kubelet[2845]: E0213 15:25:38.144575 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:38.145277 containerd[1605]: time="2025-02-13T15:25:38.145217270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mp9n6,Uid:b176a865-d239-4152-808d-eedd5ddf6f4c,Namespace:calico-system,Attempt:0,}" Feb 13 15:25:38.172945 kubelet[2845]: E0213 15:25:38.172922 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.172945 kubelet[2845]: W0213 15:25:38.172943 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.173038 kubelet[2845]: E0213 15:25:38.173022 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.173589 kubelet[2845]: E0213 15:25:38.173375 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.173589 kubelet[2845]: W0213 15:25:38.173388 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.173589 kubelet[2845]: E0213 15:25:38.173441 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.173832 kubelet[2845]: E0213 15:25:38.173817 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.173832 kubelet[2845]: W0213 15:25:38.173829 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.173886 kubelet[2845]: E0213 15:25:38.173852 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.174610 kubelet[2845]: E0213 15:25:38.174129 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.174610 kubelet[2845]: W0213 15:25:38.174161 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.174610 kubelet[2845]: E0213 15:25:38.174195 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.174610 kubelet[2845]: E0213 15:25:38.174516 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.174610 kubelet[2845]: W0213 15:25:38.174525 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.174610 kubelet[2845]: E0213 15:25:38.174554 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.177464 kubelet[2845]: E0213 15:25:38.175069 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.177464 kubelet[2845]: W0213 15:25:38.175082 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.177464 kubelet[2845]: E0213 15:25:38.175600 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.177464 kubelet[2845]: E0213 15:25:38.175807 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.177464 kubelet[2845]: W0213 15:25:38.175814 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.177464 kubelet[2845]: E0213 15:25:38.177284 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.177464 kubelet[2845]: W0213 15:25:38.177301 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.177464 kubelet[2845]: E0213 15:25:38.177433 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.177770 kubelet[2845]: E0213 15:25:38.177585 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.177770 kubelet[2845]: E0213 15:25:38.177630 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.177770 kubelet[2845]: W0213 15:25:38.177640 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.177770 kubelet[2845]: E0213 15:25:38.177678 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.177955 kubelet[2845]: E0213 15:25:38.177920 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.177955 kubelet[2845]: W0213 15:25:38.177933 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.178010 kubelet[2845]: E0213 15:25:38.177963 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.178187 kubelet[2845]: E0213 15:25:38.178122 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.178187 kubelet[2845]: W0213 15:25:38.178131 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.178241 kubelet[2845]: E0213 15:25:38.178203 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.178345 kubelet[2845]: E0213 15:25:38.178335 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.178345 kubelet[2845]: W0213 15:25:38.178343 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.178384 kubelet[2845]: E0213 15:25:38.178360 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.178554 kubelet[2845]: E0213 15:25:38.178544 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.178554 kubelet[2845]: W0213 15:25:38.178552 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.178604 kubelet[2845]: E0213 15:25:38.178566 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.178965 kubelet[2845]: E0213 15:25:38.178768 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.178965 kubelet[2845]: W0213 15:25:38.178777 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.178965 kubelet[2845]: E0213 15:25:38.178831 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.179045 kubelet[2845]: E0213 15:25:38.178978 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.179045 kubelet[2845]: W0213 15:25:38.178986 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.179045 kubelet[2845]: E0213 15:25:38.179009 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.179308 kubelet[2845]: E0213 15:25:38.179202 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.179308 kubelet[2845]: W0213 15:25:38.179214 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.179308 kubelet[2845]: E0213 15:25:38.179241 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.179454 kubelet[2845]: E0213 15:25:38.179438 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.179454 kubelet[2845]: W0213 15:25:38.179448 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.179659 kubelet[2845]: E0213 15:25:38.179644 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.179659 kubelet[2845]: W0213 15:25:38.179654 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.179829 kubelet[2845]: E0213 15:25:38.179816 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.179829 kubelet[2845]: W0213 15:25:38.179826 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.179879 kubelet[2845]: E0213 15:25:38.179837 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.179879 kubelet[2845]: E0213 15:25:38.179871 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.179928 kubelet[2845]: E0213 15:25:38.179899 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.180255 kubelet[2845]: E0213 15:25:38.180237 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.180255 kubelet[2845]: W0213 15:25:38.180251 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.180321 kubelet[2845]: E0213 15:25:38.180268 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.180477 kubelet[2845]: E0213 15:25:38.180462 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.180477 kubelet[2845]: W0213 15:25:38.180473 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.180525 kubelet[2845]: E0213 15:25:38.180488 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.180681 kubelet[2845]: E0213 15:25:38.180667 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.180681 kubelet[2845]: W0213 15:25:38.180678 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.180726 kubelet[2845]: E0213 15:25:38.180706 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.180853 kubelet[2845]: E0213 15:25:38.180840 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.180853 kubelet[2845]: W0213 15:25:38.180850 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.180894 kubelet[2845]: E0213 15:25:38.180861 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.181043 kubelet[2845]: E0213 15:25:38.181030 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.181043 kubelet[2845]: W0213 15:25:38.181040 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.181088 kubelet[2845]: E0213 15:25:38.181052 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.278608 kubelet[2845]: E0213 15:25:38.278564 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.278608 kubelet[2845]: W0213 15:25:38.278582 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.278608 kubelet[2845]: E0213 15:25:38.278600 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.278802 kubelet[2845]: E0213 15:25:38.278788 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.278802 kubelet[2845]: W0213 15:25:38.278798 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.278845 kubelet[2845]: E0213 15:25:38.278809 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.328870 kubelet[2845]: E0213 15:25:38.328760 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.328870 kubelet[2845]: W0213 15:25:38.328784 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.328870 kubelet[2845]: E0213 15:25:38.328804 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:25:38.340244 kubelet[2845]: E0213 15:25:38.340122 2845 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:25:38.340244 kubelet[2845]: W0213 15:25:38.340156 2845 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:25:38.340244 kubelet[2845]: E0213 15:25:38.340180 2845 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:25:38.658213 containerd[1605]: time="2025-02-13T15:25:38.657290982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:38.658213 containerd[1605]: time="2025-02-13T15:25:38.657345755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:38.658213 containerd[1605]: time="2025-02-13T15:25:38.657359460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:38.658213 containerd[1605]: time="2025-02-13T15:25:38.657450552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:38.672491 containerd[1605]: time="2025-02-13T15:25:38.672332962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:38.672491 containerd[1605]: time="2025-02-13T15:25:38.672475550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:38.672864 containerd[1605]: time="2025-02-13T15:25:38.672812013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:38.673849 containerd[1605]: time="2025-02-13T15:25:38.673679286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:38.729134 containerd[1605]: time="2025-02-13T15:25:38.729070634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mp9n6,Uid:b176a865-d239-4152-808d-eedd5ddf6f4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\"" Feb 13 15:25:38.734699 kubelet[2845]: E0213 15:25:38.734677 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:38.747490 containerd[1605]: time="2025-02-13T15:25:38.747208470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:25:38.754290 containerd[1605]: time="2025-02-13T15:25:38.754248590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-587bd87bfd-9njqw,Uid:bbca4e58-87c0-4500-aff5-ebbe5e7c7aae,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fbde1d253acae69a67af4d28e1fba06afa1df366afbf4aa095f8ae4a10850e3\"" Feb 13 15:25:38.754799 kubelet[2845]: E0213 15:25:38.754781 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:39.501166 kubelet[2845]: E0213 15:25:39.500939 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:40.710708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252811809.mount: Deactivated successfully. Feb 13 15:25:41.074652 containerd[1605]: time="2025-02-13T15:25:41.074533199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:41.077373 containerd[1605]: time="2025-02-13T15:25:41.077304381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 15:25:41.080262 containerd[1605]: time="2025-02-13T15:25:41.080213934Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:41.085043 containerd[1605]: time="2025-02-13T15:25:41.085008702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:41.085568 containerd[1605]: time="2025-02-13T15:25:41.085528410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.338272921s" Feb 13 15:25:41.085604 containerd[1605]: time="2025-02-13T15:25:41.085566732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference 
\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:25:41.086053 containerd[1605]: time="2025-02-13T15:25:41.086026387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:25:41.086959 containerd[1605]: time="2025-02-13T15:25:41.086933844Z" level=info msg="CreateContainer within sandbox \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:25:41.121694 containerd[1605]: time="2025-02-13T15:25:41.121629070Z" level=info msg="CreateContainer within sandbox \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5b84b6a308124029c2c4eb3494e0254cc848bef6d0112e27d67921fc8ab56bda\"" Feb 13 15:25:41.122915 containerd[1605]: time="2025-02-13T15:25:41.122267531Z" level=info msg="StartContainer for \"5b84b6a308124029c2c4eb3494e0254cc848bef6d0112e27d67921fc8ab56bda\"" Feb 13 15:25:41.210749 containerd[1605]: time="2025-02-13T15:25:41.210698257Z" level=info msg="StartContainer for \"5b84b6a308124029c2c4eb3494e0254cc848bef6d0112e27d67921fc8ab56bda\" returns successfully" Feb 13 15:25:41.255957 containerd[1605]: time="2025-02-13T15:25:41.255885488Z" level=info msg="shim disconnected" id=5b84b6a308124029c2c4eb3494e0254cc848bef6d0112e27d67921fc8ab56bda namespace=k8s.io Feb 13 15:25:41.255957 containerd[1605]: time="2025-02-13T15:25:41.255954708Z" level=warning msg="cleaning up after shim disconnected" id=5b84b6a308124029c2c4eb3494e0254cc848bef6d0112e27d67921fc8ab56bda namespace=k8s.io Feb 13 15:25:41.255957 containerd[1605]: time="2025-02-13T15:25:41.255966911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:41.495068 kubelet[2845]: E0213 15:25:41.495029 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:41.582246 kubelet[2845]: E0213 15:25:41.582200 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:41.684729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b84b6a308124029c2c4eb3494e0254cc848bef6d0112e27d67921fc8ab56bda-rootfs.mount: Deactivated successfully. Feb 13 15:25:43.494251 kubelet[2845]: E0213 15:25:43.494198 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:44.368359 containerd[1605]: time="2025-02-13T15:25:44.368315171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:44.376809 containerd[1605]: time="2025-02-13T15:25:44.376775384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 15:25:44.389299 containerd[1605]: time="2025-02-13T15:25:44.389267098Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:44.391358 containerd[1605]: time="2025-02-13T15:25:44.391334655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:44.391860 containerd[1605]: time="2025-02-13T15:25:44.391840826Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.305789553s" Feb 13 15:25:44.391909 containerd[1605]: time="2025-02-13T15:25:44.391863218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 15:25:44.392356 containerd[1605]: time="2025-02-13T15:25:44.392325127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:25:44.407174 containerd[1605]: time="2025-02-13T15:25:44.406597988Z" level=info msg="CreateContainer within sandbox \"8fbde1d253acae69a67af4d28e1fba06afa1df366afbf4aa095f8ae4a10850e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:25:44.420090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927263765.mount: Deactivated successfully. 
Feb 13 15:25:44.424728 containerd[1605]: time="2025-02-13T15:25:44.424689398Z" level=info msg="CreateContainer within sandbox \"8fbde1d253acae69a67af4d28e1fba06afa1df366afbf4aa095f8ae4a10850e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c6b904ebf02e1d1d6a6cb27a38a8a65ed414c141bdd2c9b9be4e5d93b5220ef\"" Feb 13 15:25:44.425249 containerd[1605]: time="2025-02-13T15:25:44.425205528Z" level=info msg="StartContainer for \"2c6b904ebf02e1d1d6a6cb27a38a8a65ed414c141bdd2c9b9be4e5d93b5220ef\"" Feb 13 15:25:44.493900 containerd[1605]: time="2025-02-13T15:25:44.493847107Z" level=info msg="StartContainer for \"2c6b904ebf02e1d1d6a6cb27a38a8a65ed414c141bdd2c9b9be4e5d93b5220ef\" returns successfully" Feb 13 15:25:44.590297 kubelet[2845]: E0213 15:25:44.590267 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:45.494623 kubelet[2845]: E0213 15:25:45.494572 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:45.591018 kubelet[2845]: I0213 15:25:45.590972 2845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:25:45.591588 kubelet[2845]: E0213 15:25:45.591570 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:45.884064 kubelet[2845]: I0213 15:25:45.883946 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-587bd87bfd-9njqw" podStartSLOduration=3.2469181 podStartE2EDuration="8.883905036s" 
podCreationTimestamp="2025-02-13 15:25:37 +0000 UTC" firstStartedPulling="2025-02-13 15:25:38.755105804 +0000 UTC m=+25.364469986" lastFinishedPulling="2025-02-13 15:25:44.39209274 +0000 UTC m=+31.001456922" observedRunningTime="2025-02-13 15:25:44.60273805 +0000 UTC m=+31.212102232" watchObservedRunningTime="2025-02-13 15:25:45.883905036 +0000 UTC m=+32.493269218" Feb 13 15:25:46.593567 kubelet[2845]: E0213 15:25:46.593530 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:47.495833 kubelet[2845]: E0213 15:25:47.495792 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:47.595786 kubelet[2845]: E0213 15:25:47.595318 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:47.974658 containerd[1605]: time="2025-02-13T15:25:47.974598966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:47.975452 containerd[1605]: time="2025-02-13T15:25:47.975404821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:25:47.976592 containerd[1605]: time="2025-02-13T15:25:47.976557767Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:47.979172 containerd[1605]: time="2025-02-13T15:25:47.979120503Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:47.979872 containerd[1605]: time="2025-02-13T15:25:47.979834313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.587482046s" Feb 13 15:25:47.979872 containerd[1605]: time="2025-02-13T15:25:47.979864630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:25:47.981295 containerd[1605]: time="2025-02-13T15:25:47.981255945Z" level=info msg="CreateContainer within sandbox \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:25:47.996799 containerd[1605]: time="2025-02-13T15:25:47.996748968Z" level=info msg="CreateContainer within sandbox \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7dc464d12ea5be3fb315f890cd0ebcdd00e3b9cd3a27ec89d024cd8240c0c711\"" Feb 13 15:25:47.997335 containerd[1605]: time="2025-02-13T15:25:47.997306316Z" level=info msg="StartContainer for \"7dc464d12ea5be3fb315f890cd0ebcdd00e3b9cd3a27ec89d024cd8240c0c711\"" Feb 13 15:25:48.060802 containerd[1605]: time="2025-02-13T15:25:48.060758704Z" level=info msg="StartContainer for \"7dc464d12ea5be3fb315f890cd0ebcdd00e3b9cd3a27ec89d024cd8240c0c711\" returns successfully" Feb 13 15:25:48.598452 kubelet[2845]: E0213 15:25:48.598423 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:49.148047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dc464d12ea5be3fb315f890cd0ebcdd00e3b9cd3a27ec89d024cd8240c0c711-rootfs.mount: Deactivated successfully. Feb 13 15:25:49.178931 kubelet[2845]: I0213 15:25:49.178893 2845 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:25:49.472550 kubelet[2845]: I0213 15:25:49.472503 2845 topology_manager.go:215] "Topology Admit Handler" podUID="89340465-825b-41b6-aad8-3f7e90dab570" podNamespace="kube-system" podName="coredns-76f75df574-cc6ff" Feb 13 15:25:49.540609 kubelet[2845]: I0213 15:25:49.540567 2845 topology_manager.go:215] "Topology Admit Handler" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9" podNamespace="calico-apiserver" podName="calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:49.540785 kubelet[2845]: I0213 15:25:49.540713 2845 topology_manager.go:215] "Topology Admit Handler" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656" podNamespace="kube-system" podName="coredns-76f75df574-b899m" Feb 13 15:25:49.540815 kubelet[2845]: I0213 15:25:49.540794 2845 topology_manager.go:215] "Topology Admit Handler" podUID="986d0026-8722-498a-9efe-05e1624276c3" podNamespace="calico-system" podName="calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:49.540888 kubelet[2845]: I0213 15:25:49.540871 2845 topology_manager.go:215] "Topology Admit Handler" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c" podNamespace="calico-apiserver" podName="calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:49.560194 containerd[1605]: time="2025-02-13T15:25:49.560060101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:0,}" Feb 13 15:25:49.562289 containerd[1605]: time="2025-02-13T15:25:49.561940012Z" level=info msg="shim disconnected" id=7dc464d12ea5be3fb315f890cd0ebcdd00e3b9cd3a27ec89d024cd8240c0c711 
namespace=k8s.io Feb 13 15:25:49.562289 containerd[1605]: time="2025-02-13T15:25:49.561977022Z" level=warning msg="cleaning up after shim disconnected" id=7dc464d12ea5be3fb315f890cd0ebcdd00e3b9cd3a27ec89d024cd8240c0c711 namespace=k8s.io Feb 13 15:25:49.562289 containerd[1605]: time="2025-02-13T15:25:49.561984636Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:49.565039 kubelet[2845]: I0213 15:25:49.564991 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89340465-825b-41b6-aad8-3f7e90dab570-config-volume\") pod \"coredns-76f75df574-cc6ff\" (UID: \"89340465-825b-41b6-aad8-3f7e90dab570\") " pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:49.565039 kubelet[2845]: I0213 15:25:49.565040 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxd68\" (UniqueName: \"kubernetes.io/projected/89340465-825b-41b6-aad8-3f7e90dab570-kube-api-access-sxd68\") pod \"coredns-76f75df574-cc6ff\" (UID: \"89340465-825b-41b6-aad8-3f7e90dab570\") " pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:49.607084 kubelet[2845]: E0213 15:25:49.606913 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:49.608738 containerd[1605]: time="2025-02-13T15:25:49.608054846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:25:49.649597 containerd[1605]: time="2025-02-13T15:25:49.649547535Z" level=error msg="Failed to destroy network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.649963 containerd[1605]: 
time="2025-02-13T15:25:49.649939662Z" level=error msg="encountered an error cleaning up failed sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.650032 containerd[1605]: time="2025-02-13T15:25:49.649995206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.650318 kubelet[2845]: E0213 15:25:49.650290 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.650440 kubelet[2845]: E0213 15:25:49.650355 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:49.650440 kubelet[2845]: E0213 15:25:49.650375 2845 kuberuntime_manager.go:1172] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:49.650440 kubelet[2845]: E0213 15:25:49.650427 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:49.652066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74-shm.mount: Deactivated successfully. 
Feb 13 15:25:49.666045 kubelet[2845]: I0213 15:25:49.666001 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6rt\" (UniqueName: \"kubernetes.io/projected/6773e7e4-f620-4ef0-9bbb-a553c14f7656-kube-api-access-xf6rt\") pod \"coredns-76f75df574-b899m\" (UID: \"6773e7e4-f620-4ef0-9bbb-a553c14f7656\") " pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:49.666164 kubelet[2845]: I0213 15:25:49.666068 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26hp\" (UniqueName: \"kubernetes.io/projected/986d0026-8722-498a-9efe-05e1624276c3-kube-api-access-n26hp\") pod \"calico-kube-controllers-9b8d49698-qwwkp\" (UID: \"986d0026-8722-498a-9efe-05e1624276c3\") " pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:49.666194 kubelet[2845]: I0213 15:25:49.666176 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6773e7e4-f620-4ef0-9bbb-a553c14f7656-config-volume\") pod \"coredns-76f75df574-b899m\" (UID: \"6773e7e4-f620-4ef0-9bbb-a553c14f7656\") " pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:49.666219 kubelet[2845]: I0213 15:25:49.666197 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/986d0026-8722-498a-9efe-05e1624276c3-tigera-ca-bundle\") pod \"calico-kube-controllers-9b8d49698-qwwkp\" (UID: \"986d0026-8722-498a-9efe-05e1624276c3\") " pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:49.666245 kubelet[2845]: I0213 15:25:49.666239 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77fafed8-e30a-4324-8899-dc18d6f5bcb9-calico-apiserver-certs\") pod 
\"calico-apiserver-5cfc58b57b-p7rtt\" (UID: \"77fafed8-e30a-4324-8899-dc18d6f5bcb9\") " pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:49.666278 kubelet[2845]: I0213 15:25:49.666259 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84ad88b0-6adb-4b60-9716-75d388d2367c-calico-apiserver-certs\") pod \"calico-apiserver-5cfc58b57b-pxnbk\" (UID: \"84ad88b0-6adb-4b60-9716-75d388d2367c\") " pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:49.666303 kubelet[2845]: I0213 15:25:49.666284 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mgb\" (UniqueName: \"kubernetes.io/projected/77fafed8-e30a-4324-8899-dc18d6f5bcb9-kube-api-access-h6mgb\") pod \"calico-apiserver-5cfc58b57b-p7rtt\" (UID: \"77fafed8-e30a-4324-8899-dc18d6f5bcb9\") " pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:49.666330 kubelet[2845]: I0213 15:25:49.666307 2845 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdmq\" (UniqueName: \"kubernetes.io/projected/84ad88b0-6adb-4b60-9716-75d388d2367c-kube-api-access-ljdmq\") pod \"calico-apiserver-5cfc58b57b-pxnbk\" (UID: \"84ad88b0-6adb-4b60-9716-75d388d2367c\") " pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:49.777189 kubelet[2845]: E0213 15:25:49.777067 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:49.778472 containerd[1605]: time="2025-02-13T15:25:49.778174929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:49.844533 containerd[1605]: 
time="2025-02-13T15:25:49.844475255Z" level=error msg="Failed to destroy network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.844936 containerd[1605]: time="2025-02-13T15:25:49.844898730Z" level=error msg="encountered an error cleaning up failed sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.844997 containerd[1605]: time="2025-02-13T15:25:49.844956689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.845225 kubelet[2845]: E0213 15:25:49.845198 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.845315 kubelet[2845]: E0213 15:25:49.845272 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:49.845315 kubelet[2845]: E0213 15:25:49.845294 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:49.845396 containerd[1605]: time="2025-02-13T15:25:49.845261551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:25:49.845435 kubelet[2845]: E0213 15:25:49.845364 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cc6ff" podUID="89340465-825b-41b6-aad8-3f7e90dab570" Feb 13 15:25:49.847153 kubelet[2845]: E0213 15:25:49.847097 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:49.847577 containerd[1605]: time="2025-02-13T15:25:49.847552726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:49.850133 containerd[1605]: time="2025-02-13T15:25:49.850061419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:0,}" Feb 13 15:25:49.853505 containerd[1605]: time="2025-02-13T15:25:49.853448331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:25:49.943044 containerd[1605]: time="2025-02-13T15:25:49.942940075Z" level=error msg="Failed to destroy network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.943416 containerd[1605]: time="2025-02-13T15:25:49.943385922Z" level=error msg="encountered an error cleaning up failed sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.943466 containerd[1605]: time="2025-02-13T15:25:49.943449211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.944267 kubelet[2845]: E0213 15:25:49.943772 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.944267 kubelet[2845]: E0213 15:25:49.943837 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:49.944267 kubelet[2845]: E0213 15:25:49.943862 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:49.944429 kubelet[2845]: E0213 15:25:49.943936 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9" Feb 13 15:25:49.954398 containerd[1605]: time="2025-02-13T15:25:49.954262826Z" level=error msg="Failed to destroy network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.954772 containerd[1605]: time="2025-02-13T15:25:49.954751324Z" level=error msg="encountered an error cleaning up failed sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.954872 containerd[1605]: time="2025-02-13T15:25:49.954855319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.955210 kubelet[2845]: E0213 15:25:49.955168 2845 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.955276 kubelet[2845]: E0213 15:25:49.955242 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:49.955276 kubelet[2845]: E0213 15:25:49.955264 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:49.955339 kubelet[2845]: E0213 15:25:49.955314 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-b899m" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656" Feb 13 15:25:49.956420 containerd[1605]: time="2025-02-13T15:25:49.956374173Z" level=error msg="Failed to destroy network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.957462 containerd[1605]: time="2025-02-13T15:25:49.957438401Z" level=error msg="encountered an error cleaning up failed sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.957509 containerd[1605]: time="2025-02-13T15:25:49.957478197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.957738 kubelet[2845]: E0213 15:25:49.957720 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:25:49.957929 kubelet[2845]: E0213 15:25:49.957819 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:49.957929 kubelet[2845]: E0213 15:25:49.957842 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:49.957929 kubelet[2845]: E0213 15:25:49.957904 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c" Feb 13 15:25:49.973607 containerd[1605]: time="2025-02-13T15:25:49.973548817Z" level=error msg="Failed to destroy network for 
sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.973950 containerd[1605]: time="2025-02-13T15:25:49.973918452Z" level=error msg="encountered an error cleaning up failed sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.973996 containerd[1605]: time="2025-02-13T15:25:49.973975900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.974243 kubelet[2845]: E0213 15:25:49.974212 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:49.974297 kubelet[2845]: E0213 15:25:49.974261 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:49.974297 kubelet[2845]: E0213 15:25:49.974282 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:49.974345 kubelet[2845]: E0213 15:25:49.974334 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podUID="986d0026-8722-498a-9efe-05e1624276c3" Feb 13 15:25:50.608669 kubelet[2845]: I0213 15:25:50.608641 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74" Feb 13 15:25:50.609335 containerd[1605]: time="2025-02-13T15:25:50.609304669Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 
15:25:50.609696 containerd[1605]: time="2025-02-13T15:25:50.609532127Z" level=info msg="Ensure that sandbox ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74 in task-service has been cleanup successfully" Feb 13 15:25:50.610065 kubelet[2845]: I0213 15:25:50.609972 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360" Feb 13 15:25:50.610525 containerd[1605]: time="2025-02-13T15:25:50.610499865Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:25:50.610697 containerd[1605]: time="2025-02-13T15:25:50.610676567Z" level=info msg="Ensure that sandbox 524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360 in task-service has been cleanup successfully" Feb 13 15:25:50.611222 kubelet[2845]: I0213 15:25:50.611201 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5" Feb 13 15:25:50.611714 containerd[1605]: time="2025-02-13T15:25:50.611601574Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:25:50.611777 containerd[1605]: time="2025-02-13T15:25:50.611745445Z" level=info msg="Ensure that sandbox 215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5 in task-service has been cleanup successfully" Feb 13 15:25:50.612252 containerd[1605]: time="2025-02-13T15:25:50.612229474Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:25:50.612431 containerd[1605]: time="2025-02-13T15:25:50.612362113Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:25:50.612431 containerd[1605]: time="2025-02-13T15:25:50.612385497Z" level=info msg="TearDown network for 
sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:25:50.612431 containerd[1605]: time="2025-02-13T15:25:50.612397540Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:25:50.612521 containerd[1605]: time="2025-02-13T15:25:50.612438497Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:25:50.612521 containerd[1605]: time="2025-02-13T15:25:50.612446962Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:25:50.615522 kubelet[2845]: I0213 15:25:50.615340 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2" Feb 13 15:25:50.615522 kubelet[2845]: E0213 15:25:50.615371 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:50.615685 systemd[1]: run-netns-cni\x2d5102427d\x2db506\x2d9b74\x2db5d5\x2d299ef23b1abb.mount: Deactivated successfully. Feb 13 15:25:50.615903 systemd[1]: run-netns-cni\x2d34311b8f\x2d2767\x2d4206\x2d9f72\x2d8d9a00888597.mount: Deactivated successfully. Feb 13 15:25:50.616086 systemd[1]: run-netns-cni\x2dc55bd9bb\x2d2ffc\x2d3051\x2d0a50\x2d896345b13c12.mount: Deactivated successfully. 
Feb 13 15:25:50.616342 containerd[1605]: time="2025-02-13T15:25:50.616312353Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:25:50.616564 containerd[1605]: time="2025-02-13T15:25:50.616539580Z" level=info msg="Ensure that sandbox 948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2 in task-service has been cleanup successfully" Feb 13 15:25:50.617374 containerd[1605]: time="2025-02-13T15:25:50.616914084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:1,}" Feb 13 15:25:50.617374 containerd[1605]: time="2025-02-13T15:25:50.616967314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:25:50.617719 containerd[1605]: time="2025-02-13T15:25:50.617020473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:1,}" Feb 13 15:25:50.617845 containerd[1605]: time="2025-02-13T15:25:50.617828942Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:25:50.617903 containerd[1605]: time="2025-02-13T15:25:50.617890488Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:25:50.618744 containerd[1605]: time="2025-02-13T15:25:50.618588569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:25:50.619279 kubelet[2845]: I0213 15:25:50.619264 2845 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e" Feb 13 15:25:50.620494 containerd[1605]: time="2025-02-13T15:25:50.620202290Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:25:50.620494 containerd[1605]: time="2025-02-13T15:25:50.620365917Z" level=info msg="Ensure that sandbox e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e in task-service has been cleanup successfully" Feb 13 15:25:50.620451 systemd[1]: run-netns-cni\x2de540a3c0\x2dd736\x2d7bda\x2da458\x2de00ccca1083e.mount: Deactivated successfully. Feb 13 15:25:50.620634 kubelet[2845]: I0213 15:25:50.620237 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84" Feb 13 15:25:50.621481 containerd[1605]: time="2025-02-13T15:25:50.621193592Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:25:50.621481 containerd[1605]: time="2025-02-13T15:25:50.621339246Z" level=info msg="Ensure that sandbox 2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84 in task-service has been cleanup successfully" Feb 13 15:25:50.621953 containerd[1605]: time="2025-02-13T15:25:50.621759576Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:25:50.621953 containerd[1605]: time="2025-02-13T15:25:50.621868360Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully" Feb 13 15:25:50.621953 containerd[1605]: time="2025-02-13T15:25:50.621836650Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:25:50.621953 containerd[1605]: time="2025-02-13T15:25:50.621919416Z" level=info msg="StopPodSandbox for 
\"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:25:50.623109 kubelet[2845]: E0213 15:25:50.622227 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:50.623235 containerd[1605]: time="2025-02-13T15:25:50.622463247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:1,}" Feb 13 15:25:50.623235 containerd[1605]: time="2025-02-13T15:25:50.622628668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:1,}" Feb 13 15:25:50.625136 systemd[1]: run-netns-cni\x2d595e47c7\x2dd274\x2d253b\x2d25f4\x2d396c4cc5c377.mount: Deactivated successfully. Feb 13 15:25:50.625348 systemd[1]: run-netns-cni\x2d1c343501\x2dc18a\x2d7979\x2db6e4\x2d6a67d3b5093a.mount: Deactivated successfully. 
Feb 13 15:25:50.759954 containerd[1605]: time="2025-02-13T15:25:50.759885797Z" level=error msg="Failed to destroy network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.760381 containerd[1605]: time="2025-02-13T15:25:50.760351361Z" level=error msg="encountered an error cleaning up failed sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.760451 containerd[1605]: time="2025-02-13T15:25:50.760423197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.760721 kubelet[2845]: E0213 15:25:50.760695 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.760782 kubelet[2845]: E0213 15:25:50.760753 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:50.760782 kubelet[2845]: E0213 15:25:50.760776 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:50.760832 kubelet[2845]: E0213 15:25:50.760827 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cc6ff" podUID="89340465-825b-41b6-aad8-3f7e90dab570" Feb 13 15:25:50.780490 containerd[1605]: time="2025-02-13T15:25:50.780312868Z" level=error msg="Failed to destroy network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:25:50.780871 containerd[1605]: time="2025-02-13T15:25:50.780850027Z" level=error msg="encountered an error cleaning up failed sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.780967 containerd[1605]: time="2025-02-13T15:25:50.780949925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.781296 kubelet[2845]: E0213 15:25:50.781272 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.781425 kubelet[2845]: E0213 15:25:50.781414 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:50.781773 
kubelet[2845]: E0213 15:25:50.781477 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:50.781773 kubelet[2845]: E0213 15:25:50.781540 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c" Feb 13 15:25:50.786164 containerd[1605]: time="2025-02-13T15:25:50.786094038Z" level=error msg="Failed to destroy network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.786627 containerd[1605]: time="2025-02-13T15:25:50.786599036Z" level=error msg="encountered an error cleaning up failed sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.786699 containerd[1605]: time="2025-02-13T15:25:50.786673767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.787131 kubelet[2845]: E0213 15:25:50.786909 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.787131 kubelet[2845]: E0213 15:25:50.786952 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:50.787131 kubelet[2845]: E0213 15:25:50.786974 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:50.787242 kubelet[2845]: E0213 15:25:50.787020 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-b899m" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656" Feb 13 15:25:50.801704 containerd[1605]: time="2025-02-13T15:25:50.801544590Z" level=error msg="Failed to destroy network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.802088 containerd[1605]: time="2025-02-13T15:25:50.802066611Z" level=error msg="encountered an error cleaning up failed sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.802203 containerd[1605]: time="2025-02-13T15:25:50.802184472Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.802574 kubelet[2845]: E0213 15:25:50.802545 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.802637 kubelet[2845]: E0213 15:25:50.802604 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:50.802637 kubelet[2845]: E0213 15:25:50.802628 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:50.802719 kubelet[2845]: E0213 15:25:50.802704 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:50.804554 containerd[1605]: time="2025-02-13T15:25:50.804493220Z" level=error msg="Failed to destroy network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.805043 containerd[1605]: time="2025-02-13T15:25:50.805000782Z" level=error msg="encountered an error cleaning up failed sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.805101 containerd[1605]: time="2025-02-13T15:25:50.805069902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.805475 kubelet[2845]: E0213 15:25:50.805282 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.805475 kubelet[2845]: E0213 15:25:50.805362 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:50.805475 kubelet[2845]: E0213 15:25:50.805388 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:50.805714 kubelet[2845]: E0213 15:25:50.805440 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9" Feb 13 15:25:50.806121 containerd[1605]: time="2025-02-13T15:25:50.806083125Z" level=error msg="Failed to destroy network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.806473 containerd[1605]: time="2025-02-13T15:25:50.806448272Z" level=error msg="encountered an error cleaning up failed sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.806528 containerd[1605]: time="2025-02-13T15:25:50.806494008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.806671 kubelet[2845]: E0213 15:25:50.806643 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:50.806737 kubelet[2845]: E0213 15:25:50.806686 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:50.806737 kubelet[2845]: E0213 15:25:50.806704 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:50.806803 kubelet[2845]: E0213 15:25:50.806741 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podUID="986d0026-8722-498a-9efe-05e1624276c3" Feb 13 15:25:51.153392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491-shm.mount: Deactivated successfully. Feb 13 15:25:51.623086 kubelet[2845]: I0213 15:25:51.622903 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0" Feb 13 15:25:51.623784 containerd[1605]: time="2025-02-13T15:25:51.623565314Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:25:51.624077 containerd[1605]: time="2025-02-13T15:25:51.623979763Z" level=info msg="Ensure that sandbox 2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0 in task-service has been cleanup successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.624221596Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.624236013Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.624598825Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.624696940Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.624710495Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:25:51.627034 containerd[1605]: 
time="2025-02-13T15:25:51.625122850Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.625347993Z" level=info msg="Ensure that sandbox 8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40 in task-service has been cleanup successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.625629701Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.625673814Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.625987443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.626528460Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.626864521Z" level=info msg="Ensure that sandbox 698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0 in task-service has been cleanup successfully" Feb 13 15:25:51.627034 containerd[1605]: time="2025-02-13T15:25:51.626987672Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:25:51.626719 systemd[1]: run-netns-cni\x2d7b8522e4\x2de819\x2d4ccb\x2d7e66\x2d9bcce37815d5.mount: Deactivated successfully. 
Feb 13 15:25:51.627613 kubelet[2845]: I0213 15:25:51.624355 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40" Feb 13 15:25:51.627613 kubelet[2845]: I0213 15:25:51.625996 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0" Feb 13 15:25:51.627667 containerd[1605]: time="2025-02-13T15:25:51.627074786Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:25:51.627667 containerd[1605]: time="2025-02-13T15:25:51.627087490Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:25:51.627667 containerd[1605]: time="2025-02-13T15:25:51.627640779Z" level=info msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully" Feb 13 15:25:51.627667 containerd[1605]: time="2025-02-13T15:25:51.627656478Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully" Feb 13 15:25:51.627761 containerd[1605]: time="2025-02-13T15:25:51.627750155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:2,}" Feb 13 15:25:51.629214 containerd[1605]: time="2025-02-13T15:25:51.628079993Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:25:51.629214 containerd[1605]: time="2025-02-13T15:25:51.628407909Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:25:51.629214 containerd[1605]: time="2025-02-13T15:25:51.628424711Z" level=info 
msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully" Feb 13 15:25:51.629434 kubelet[2845]: I0213 15:25:51.628014 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491" Feb 13 15:25:51.629434 kubelet[2845]: E0213 15:25:51.628729 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:51.629541 containerd[1605]: time="2025-02-13T15:25:51.629289245Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:25:51.629541 containerd[1605]: time="2025-02-13T15:25:51.629439146Z" level=info msg="Ensure that sandbox 40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491 in task-service has been cleanup successfully" Feb 13 15:25:51.629684 containerd[1605]: time="2025-02-13T15:25:51.629664400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:2,}" Feb 13 15:25:51.630132 kubelet[2845]: I0213 15:25:51.630075 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d" Feb 13 15:25:51.630280 containerd[1605]: time="2025-02-13T15:25:51.630259678Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:25:51.630320 containerd[1605]: time="2025-02-13T15:25:51.630279635Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:25:51.631030 containerd[1605]: time="2025-02-13T15:25:51.630972326Z" level=info msg="StopPodSandbox for 
\"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:25:51.631030 containerd[1605]: time="2025-02-13T15:25:51.631022741Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:25:51.631285 containerd[1605]: time="2025-02-13T15:25:51.631052917Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:25:51.631285 containerd[1605]: time="2025-02-13T15:25:51.631062295Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:25:51.631257 systemd[1]: run-netns-cni\x2df080cbbf\x2d853c\x2dae13\x2de3c4\x2d9478d832169b.mount: Deactivated successfully. Feb 13 15:25:51.631470 kubelet[2845]: E0213 15:25:51.631227 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:51.631511 containerd[1605]: time="2025-02-13T15:25:51.631435686Z" level=info msg="Ensure that sandbox bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d in task-service has been cleanup successfully" Feb 13 15:25:51.631426 systemd[1]: run-netns-cni\x2d23a9bc40\x2dee92\x2dadb1\x2dac62\x2d227b2e872a83.mount: Deactivated successfully. 
Feb 13 15:25:51.631643 containerd[1605]: time="2025-02-13T15:25:51.631596258Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:25:51.631643 containerd[1605]: time="2025-02-13T15:25:51.631609684Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:25:51.632022 containerd[1605]: time="2025-02-13T15:25:51.631856867Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:25:51.632022 containerd[1605]: time="2025-02-13T15:25:51.631921580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:2,}" Feb 13 15:25:51.632022 containerd[1605]: time="2025-02-13T15:25:51.631952217Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:25:51.632022 containerd[1605]: time="2025-02-13T15:25:51.631975721Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:25:51.632306 kubelet[2845]: I0213 15:25:51.632288 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4" Feb 13 15:25:51.632375 containerd[1605]: time="2025-02-13T15:25:51.632348481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:25:51.632995 containerd[1605]: time="2025-02-13T15:25:51.632700683Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:25:51.632995 containerd[1605]: time="2025-02-13T15:25:51.632868327Z" level=info 
msg="Ensure that sandbox ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4 in task-service has been cleanup successfully" Feb 13 15:25:51.633134 containerd[1605]: time="2025-02-13T15:25:51.633098590Z" level=info msg="TearDown network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully" Feb 13 15:25:51.633134 containerd[1605]: time="2025-02-13T15:25:51.633118457Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully" Feb 13 15:25:51.634615 containerd[1605]: time="2025-02-13T15:25:51.634378664Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:25:51.634615 containerd[1605]: time="2025-02-13T15:25:51.634500052Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:25:51.634615 containerd[1605]: time="2025-02-13T15:25:51.634514309Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:25:51.634893 systemd[1]: run-netns-cni\x2de3d389d7\x2db9cb\x2dc545\x2d3fb5\x2dc22070dcde2c.mount: Deactivated successfully. Feb 13 15:25:51.635070 containerd[1605]: time="2025-02-13T15:25:51.634948614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:2,}" Feb 13 15:25:51.635078 systemd[1]: run-netns-cni\x2d5d47db40\x2dad74\x2d32f8\x2de93f\x2ddb3d63783187.mount: Deactivated successfully. Feb 13 15:25:51.637807 systemd[1]: run-netns-cni\x2db3acdb47\x2d5add\x2d146c\x2defd4\x2d6ad07bfd3df1.mount: Deactivated successfully. 
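Every sandbox failure recorded below fails the same way: the Calico CNI plugin stats `/var/lib/calico/nodename` and finds it missing, because the calico/node container has not yet written it. A minimal sketch of that readiness check, host-side (the path is taken from the log; wrapping it as a standalone probe is an assumption, not part of the plugin):

```python
import os

def calico_node_ready(path="/var/lib/calico/nodename"):
    """True once calico/node has written its nodename file.

    Until this file exists, every CNI add/delete on the node fails with
    'stat /var/lib/calico/nodename: no such file or directory', which is
    exactly the error the kubelet and containerd records below repeat.
    """
    return os.path.isfile(path)
```

On a node in the state shown in this log, the probe returns False, and the kubelet keeps retrying sandbox creation (Attempt:2, then Attempt:3) until calico/node comes up.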
Feb 13 15:25:52.405858 containerd[1605]: time="2025-02-13T15:25:52.405685835Z" level=error msg="Failed to destroy network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.406397 containerd[1605]: time="2025-02-13T15:25:52.406373576Z" level=error msg="encountered an error cleaning up failed sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.406515 containerd[1605]: time="2025-02-13T15:25:52.406497809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.408170 kubelet[2845]: E0213 15:25:52.406955 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.408170 kubelet[2845]: E0213 15:25:52.407010 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:52.408170 kubelet[2845]: E0213 15:25:52.407029 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:52.408326 kubelet[2845]: E0213 15:25:52.407082 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podUID="986d0026-8722-498a-9efe-05e1624276c3" Feb 13 15:25:52.408825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc-shm.mount: Deactivated successfully. 
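Records like the `pod_workers.go:1298` line above can be machine-filtered to list which pods are stuck in the CreatePodSandbox retry loop. A minimal sketch; the regex is an assumption about this log's trailing `pod="ns/name" podUID="..."` fields, not a guaranteed kubelet format:

```python
import re

# Matches the unescaped pod="namespace/name" podUID="uuid" fields that
# kubelet appends to "Error syncing pod" records in this log.
POD_RE = re.compile(r'pod="(?P<ns>[^/"]+)/(?P<name>[^"]+)" podUID="(?P<uid>[0-9a-f-]+)"')

def failing_pods(lines):
    """Yield (namespace, name, uid) for each 'Error syncing pod' record."""
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        m = POD_RE.search(line)
        if m:
            yield m.group("ns"), m.group("name"), m.group("uid")

sample = ('kubelet[2845]: E0213 15:25:52.407082 2845 pod_workers.go:1298] '
          '"Error syncing pod, skipping" err="..." '
          'pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" '
          'podUID="986d0026-8722-498a-9efe-05e1624276c3"')
print(list(failing_pods([sample])))
```

Run over the records below, this would surface each affected pod once per sync attempt: calico-kube-controllers, both coredns replicas, both calico-apiserver replicas, and csi-node-driver-hctxf.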
Feb 13 15:25:52.417242 containerd[1605]: time="2025-02-13T15:25:52.417206150Z" level=error msg="Failed to destroy network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.417802 containerd[1605]: time="2025-02-13T15:25:52.417781741Z" level=error msg="encountered an error cleaning up failed sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.417917 containerd[1605]: time="2025-02-13T15:25:52.417899663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.418236 kubelet[2845]: E0213 15:25:52.418217 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.418341 kubelet[2845]: E0213 15:25:52.418331 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:52.418428 kubelet[2845]: E0213 15:25:52.418419 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:52.418610 kubelet[2845]: E0213 15:25:52.418589 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-b899m" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656" Feb 13 15:25:52.424321 containerd[1605]: time="2025-02-13T15:25:52.424274795Z" level=error msg="Failed to destroy network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:25:52.424820 containerd[1605]: time="2025-02-13T15:25:52.424797387Z" level=error msg="encountered an error cleaning up failed sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.424924 containerd[1605]: time="2025-02-13T15:25:52.424906723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.425302 kubelet[2845]: E0213 15:25:52.425265 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.425393 kubelet[2845]: E0213 15:25:52.425328 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:52.425393 
kubelet[2845]: E0213 15:25:52.425348 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:52.425393 kubelet[2845]: E0213 15:25:52.425395 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c" Feb 13 15:25:52.433963 containerd[1605]: time="2025-02-13T15:25:52.433892157Z" level=error msg="Failed to destroy network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.436579 containerd[1605]: time="2025-02-13T15:25:52.436437909Z" level=error msg="encountered an error cleaning up failed sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.436579 containerd[1605]: time="2025-02-13T15:25:52.436491860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.437072 kubelet[2845]: E0213 15:25:52.436767 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.437072 kubelet[2845]: E0213 15:25:52.436815 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:52.437072 kubelet[2845]: E0213 15:25:52.436833 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:52.437203 kubelet[2845]: E0213 15:25:52.436882 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:52.451445 containerd[1605]: time="2025-02-13T15:25:52.451391302Z" level=error msg="Failed to destroy network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452193 containerd[1605]: time="2025-02-13T15:25:52.452153133Z" level=error msg="Failed to destroy network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452250 containerd[1605]: time="2025-02-13T15:25:52.452214037Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452298 containerd[1605]: time="2025-02-13T15:25:52.452260124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452511 kubelet[2845]: E0213 15:25:52.452478 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452586 kubelet[2845]: E0213 15:25:52.452538 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:52.452586 kubelet[2845]: E0213 15:25:52.452560 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:52.452638 containerd[1605]: time="2025-02-13T15:25:52.452544438Z" level=error msg="encountered an error cleaning up failed sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452638 containerd[1605]: time="2025-02-13T15:25:52.452597177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452782 kubelet[2845]: E0213 15:25:52.452618 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cc6ff" podUID="89340465-825b-41b6-aad8-3f7e90dab570" Feb 13 15:25:52.452828 kubelet[2845]: E0213 15:25:52.452810 
2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.452888 kubelet[2845]: E0213 15:25:52.452867 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:52.453088 kubelet[2845]: E0213 15:25:52.453064 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:52.453135 kubelet[2845]: E0213 15:25:52.453128 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9" Feb 13 15:25:52.638607 kubelet[2845]: I0213 15:25:52.638576 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b" Feb 13 15:25:52.639395 containerd[1605]: time="2025-02-13T15:25:52.639332077Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:25:52.639656 containerd[1605]: time="2025-02-13T15:25:52.639566828Z" level=info msg="Ensure that sandbox ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b in task-service has been cleanup successfully" Feb 13 15:25:52.641292 containerd[1605]: time="2025-02-13T15:25:52.640164901Z" level=info msg="TearDown network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" successfully" Feb 13 15:25:52.641292 containerd[1605]: time="2025-02-13T15:25:52.640183717Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" returns successfully" Feb 13 15:25:52.641292 containerd[1605]: time="2025-02-13T15:25:52.640535947Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:25:52.641292 containerd[1605]: time="2025-02-13T15:25:52.640655342Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:25:52.641292 containerd[1605]: time="2025-02-13T15:25:52.640669278Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:25:52.641292 containerd[1605]: time="2025-02-13T15:25:52.641182923Z" level=info msg="StopPodSandbox for 
\"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:25:52.642177 containerd[1605]: time="2025-02-13T15:25:52.641298550Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:25:52.642177 containerd[1605]: time="2025-02-13T15:25:52.641336692Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:25:52.642177 containerd[1605]: time="2025-02-13T15:25:52.641774744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:3,}" Feb 13 15:25:52.642262 kubelet[2845]: E0213 15:25:52.641534 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:52.698754 kubelet[2845]: I0213 15:25:52.697911 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e" Feb 13 15:25:52.699003 containerd[1605]: time="2025-02-13T15:25:52.698549040Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:25:52.699003 containerd[1605]: time="2025-02-13T15:25:52.698799620Z" level=info msg="Ensure that sandbox dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e in task-service has been cleanup successfully" Feb 13 15:25:52.699166 containerd[1605]: time="2025-02-13T15:25:52.699105294Z" level=info msg="TearDown network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" successfully" Feb 13 15:25:52.699166 containerd[1605]: time="2025-02-13T15:25:52.699124050Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" returns successfully" Feb 
13 15:25:52.699518 containerd[1605]: time="2025-02-13T15:25:52.699501428Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:25:52.699642 containerd[1605]: time="2025-02-13T15:25:52.699619871Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:25:52.699642 containerd[1605]: time="2025-02-13T15:25:52.699635761Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:25:52.700358 containerd[1605]: time="2025-02-13T15:25:52.699807363Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:25:52.700358 containerd[1605]: time="2025-02-13T15:25:52.699879168Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:25:52.700358 containerd[1605]: time="2025-02-13T15:25:52.699887513Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:25:52.700457 kubelet[2845]: I0213 15:25:52.699841 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666" Feb 13 15:25:52.700487 containerd[1605]: time="2025-02-13T15:25:52.700474887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:25:52.700625 containerd[1605]: time="2025-02-13T15:25:52.700580174Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" Feb 13 15:25:52.700951 containerd[1605]: time="2025-02-13T15:25:52.700795509Z" level=info msg="Ensure that sandbox 
54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666 in task-service has been cleanup successfully" Feb 13 15:25:52.701067 containerd[1605]: time="2025-02-13T15:25:52.701044427Z" level=info msg="TearDown network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" successfully" Feb 13 15:25:52.701067 containerd[1605]: time="2025-02-13T15:25:52.701060647Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" returns successfully" Feb 13 15:25:52.702377 containerd[1605]: time="2025-02-13T15:25:52.702337595Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:25:52.702476 containerd[1605]: time="2025-02-13T15:25:52.702457239Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully" Feb 13 15:25:52.702476 containerd[1605]: time="2025-02-13T15:25:52.702470584Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully" Feb 13 15:25:52.706197 containerd[1605]: time="2025-02-13T15:25:52.704935654Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:25:52.706197 containerd[1605]: time="2025-02-13T15:25:52.705246779Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:25:52.706197 containerd[1605]: time="2025-02-13T15:25:52.705262128Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:25:52.706330 kubelet[2845]: I0213 15:25:52.705648 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc" Feb 13 15:25:52.706448 containerd[1605]: 
time="2025-02-13T15:25:52.706420303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:25:52.706541 containerd[1605]: time="2025-02-13T15:25:52.706521933Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:25:52.706991 containerd[1605]: time="2025-02-13T15:25:52.706968683Z" level=info msg="Ensure that sandbox 38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc in task-service has been cleanup successfully" Feb 13 15:25:52.707248 containerd[1605]: time="2025-02-13T15:25:52.707165893Z" level=info msg="TearDown network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" successfully" Feb 13 15:25:52.707248 containerd[1605]: time="2025-02-13T15:25:52.707182414Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" returns successfully" Feb 13 15:25:52.707566 containerd[1605]: time="2025-02-13T15:25:52.707531830Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:25:52.707663 containerd[1605]: time="2025-02-13T15:25:52.707641886Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:25:52.707663 containerd[1605]: time="2025-02-13T15:25:52.707657566Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:25:52.708051 containerd[1605]: time="2025-02-13T15:25:52.707974090Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:25:52.708219 containerd[1605]: time="2025-02-13T15:25:52.708159599Z" level=info msg="TearDown network for sandbox 
\"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:25:52.708219 containerd[1605]: time="2025-02-13T15:25:52.708170430Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:25:52.708760 kubelet[2845]: I0213 15:25:52.708738 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a" Feb 13 15:25:52.709662 containerd[1605]: time="2025-02-13T15:25:52.708956806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:3,}" Feb 13 15:25:52.709662 containerd[1605]: time="2025-02-13T15:25:52.709411620Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" Feb 13 15:25:52.710260 containerd[1605]: time="2025-02-13T15:25:52.710241029Z" level=info msg="Ensure that sandbox ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a in task-service has been cleanup successfully" Feb 13 15:25:52.710959 containerd[1605]: time="2025-02-13T15:25:52.710927948Z" level=info msg="TearDown network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" successfully" Feb 13 15:25:52.710959 containerd[1605]: time="2025-02-13T15:25:52.710954788Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" returns successfully" Feb 13 15:25:52.725803 kubelet[2845]: I0213 15:25:52.723748 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f" Feb 13 15:25:52.736328 containerd[1605]: time="2025-02-13T15:25:52.735115765Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" Feb 13 
15:25:52.736464 containerd[1605]: time="2025-02-13T15:25:52.736436085Z" level=info msg="Ensure that sandbox f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f in task-service has been cleanup successfully" Feb 13 15:25:52.736774 containerd[1605]: time="2025-02-13T15:25:52.736663531Z" level=info msg="TearDown network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" successfully" Feb 13 15:25:52.736822 containerd[1605]: time="2025-02-13T15:25:52.736772275Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" returns successfully" Feb 13 15:25:52.736920 containerd[1605]: time="2025-02-13T15:25:52.736900286Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:25:52.737122 containerd[1605]: time="2025-02-13T15:25:52.737094601Z" level=info msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully" Feb 13 15:25:52.737122 containerd[1605]: time="2025-02-13T15:25:52.737115600Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.752855852Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.752917137Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753067239Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753092767Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns 
successfully" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753111262Z" level=info msg="TearDown network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753130738Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753468733Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753584640Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:25:52.753621 containerd[1605]: time="2025-02-13T15:25:52.753594669Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:25:52.753962 kubelet[2845]: E0213 15:25:52.753393 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:52.755016 containerd[1605]: time="2025-02-13T15:25:52.754995951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:3,}" Feb 13 15:25:52.755375 containerd[1605]: time="2025-02-13T15:25:52.755288551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:3,}" Feb 13 15:25:52.818873 containerd[1605]: time="2025-02-13T15:25:52.818818213Z" level=error msg="Failed to destroy network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.841383 containerd[1605]: time="2025-02-13T15:25:52.841306117Z" level=error msg="encountered an error cleaning up failed sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.841554 containerd[1605]: time="2025-02-13T15:25:52.841412907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.841753 kubelet[2845]: E0213 15:25:52.841713 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:52.841886 kubelet[2845]: E0213 15:25:52.841789 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:52.841886 kubelet[2845]: E0213 15:25:52.841816 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:52.841979 kubelet[2845]: E0213 15:25:52.841888 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cc6ff" podUID="89340465-825b-41b6-aad8-3f7e90dab570" Feb 13 15:25:53.144274 containerd[1605]: time="2025-02-13T15:25:53.144062138Z" level=error msg="Failed to destroy network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.145119 containerd[1605]: time="2025-02-13T15:25:53.145093955Z" level=error msg="encountered an error cleaning up failed sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.145778 containerd[1605]: time="2025-02-13T15:25:53.145510477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.146776 kubelet[2845]: E0213 15:25:53.146424 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.146776 kubelet[2845]: E0213 15:25:53.146478 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:53.146776 kubelet[2845]: E0213 15:25:53.146500 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:53.146979 kubelet[2845]: E0213 15:25:53.146553 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podUID="986d0026-8722-498a-9efe-05e1624276c3" Feb 13 15:25:53.154248 containerd[1605]: time="2025-02-13T15:25:53.154090410Z" level=error msg="Failed to destroy network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.155945 containerd[1605]: time="2025-02-13T15:25:53.154632978Z" level=error msg="encountered an error cleaning up failed sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.155945 containerd[1605]: time="2025-02-13T15:25:53.154700456Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.156111 kubelet[2845]: E0213 15:25:53.154940 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.156111 kubelet[2845]: E0213 15:25:53.155003 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:53.156111 kubelet[2845]: E0213 15:25:53.155029 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:53.156890 kubelet[2845]: E0213 
15:25:53.155107 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c" Feb 13 15:25:53.157496 containerd[1605]: time="2025-02-13T15:25:53.157456331Z" level=error msg="Failed to destroy network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.157907 containerd[1605]: time="2025-02-13T15:25:53.157882471Z" level=error msg="encountered an error cleaning up failed sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.157967 containerd[1605]: time="2025-02-13T15:25:53.157951921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.158413 kubelet[2845]: E0213 15:25:53.158370 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.158652 kubelet[2845]: E0213 15:25:53.158638 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:53.158768 kubelet[2845]: E0213 15:25:53.158746 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:53.158852 kubelet[2845]: E0213 15:25:53.158838 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9" Feb 13 15:25:53.172173 containerd[1605]: time="2025-02-13T15:25:53.171906586Z" level=error msg="Failed to destroy network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.175547 containerd[1605]: time="2025-02-13T15:25:53.175196345Z" level=error msg="encountered an error cleaning up failed sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.175547 containerd[1605]: time="2025-02-13T15:25:53.175270985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.176664 kubelet[2845]: E0213 15:25:53.176316 2845 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.176664 kubelet[2845]: E0213 15:25:53.176382 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:53.176664 kubelet[2845]: E0213 15:25:53.176407 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:53.176878 kubelet[2845]: E0213 15:25:53.176467 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:53.192989 containerd[1605]: time="2025-02-13T15:25:53.192894189Z" level=error msg="Failed to destroy network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.193398 containerd[1605]: time="2025-02-13T15:25:53.193361607Z" level=error msg="encountered an error cleaning up failed sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.193456 containerd[1605]: time="2025-02-13T15:25:53.193419716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:53.193742 kubelet[2845]: E0213 15:25:53.193701 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Feb 13 15:25:53.193892 kubelet[2845]: E0213 15:25:53.193771 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:53.193892 kubelet[2845]: E0213 15:25:53.193797 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:53.193892 kubelet[2845]: E0213 15:25:53.193856 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-b899m" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656" Feb 13 15:25:53.211442 systemd[1]: run-netns-cni\x2d19053000\x2df7bf\x2da46c\x2df597\x2ddb3df1588106.mount: Deactivated successfully. 
Feb 13 15:25:53.211684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f-shm.mount: Deactivated successfully. Feb 13 15:25:53.211836 systemd[1]: run-netns-cni\x2d4b31a47b\x2dc3ec\x2de8e1\x2da5a8\x2d9425463fae57.mount: Deactivated successfully. Feb 13 15:25:53.211981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b-shm.mount: Deactivated successfully. Feb 13 15:25:53.212170 systemd[1]: run-netns-cni\x2db2501fc9\x2d2d5e\x2d66e5\x2dbf8f\x2de649b89e68e5.mount: Deactivated successfully. Feb 13 15:25:53.212304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a-shm.mount: Deactivated successfully. Feb 13 15:25:53.212446 systemd[1]: run-netns-cni\x2d7d51d283\x2dadc8\x2ddc9c\x2d002a\x2d2e93b743910f.mount: Deactivated successfully. Feb 13 15:25:53.212582 systemd[1]: run-netns-cni\x2d7456cefe\x2dff15\x2dce97\x2d93d4\x2d84e27dbdddb2.mount: Deactivated successfully. Feb 13 15:25:53.212714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666-shm.mount: Deactivated successfully. Feb 13 15:25:53.212853 systemd[1]: run-netns-cni\x2d46a33e4d\x2d6297\x2d7542\x2d6122\x2d1d0c2ea0d3cc.mount: Deactivated successfully. Feb 13 15:25:53.213009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e-shm.mount: Deactivated successfully. 
Feb 13 15:25:53.727534 kubelet[2845]: I0213 15:25:53.727496 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24" Feb 13 15:25:53.729322 containerd[1605]: time="2025-02-13T15:25:53.729288731Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" Feb 13 15:25:53.729628 containerd[1605]: time="2025-02-13T15:25:53.729502433Z" level=info msg="Ensure that sandbox 5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24 in task-service has been cleanup successfully" Feb 13 15:25:53.734201 containerd[1605]: time="2025-02-13T15:25:53.734177733Z" level=info msg="TearDown network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" successfully" Feb 13 15:25:53.734264 containerd[1605]: time="2025-02-13T15:25:53.734200656Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" returns successfully" Feb 13 15:25:53.734416 kubelet[2845]: I0213 15:25:53.734396 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286" Feb 13 15:25:53.734904 containerd[1605]: time="2025-02-13T15:25:53.734852790Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:25:53.734977 containerd[1605]: time="2025-02-13T15:25:53.734961805Z" level=info msg="TearDown network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" successfully" Feb 13 15:25:53.735004 containerd[1605]: time="2025-02-13T15:25:53.734978116Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" returns successfully" Feb 13 15:25:53.735582 containerd[1605]: time="2025-02-13T15:25:53.735437548Z" level=info msg="StopPodSandbox for 
\"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:25:53.735582 containerd[1605]: time="2025-02-13T15:25:53.735490948Z" level=info msg="StopPodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" Feb 13 15:25:53.735582 containerd[1605]: time="2025-02-13T15:25:53.735552584Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:25:53.735582 containerd[1605]: time="2025-02-13T15:25:53.735566009Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:25:53.735699 containerd[1605]: time="2025-02-13T15:25:53.735634868Z" level=info msg="Ensure that sandbox 77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286 in task-service has been cleanup successfully" Feb 13 15:25:53.736049 containerd[1605]: time="2025-02-13T15:25:53.735788257Z" level=info msg="TearDown network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" successfully" Feb 13 15:25:53.736049 containerd[1605]: time="2025-02-13T15:25:53.735801362Z" level=info msg="StopPodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" returns successfully" Feb 13 15:25:53.736255 containerd[1605]: time="2025-02-13T15:25:53.736230858Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:25:53.736450 containerd[1605]: time="2025-02-13T15:25:53.736431414Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:25:53.736525 containerd[1605]: time="2025-02-13T15:25:53.736510523Z" level=info msg="TearDown network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" successfully" Feb 13 15:25:53.736563 containerd[1605]: time="2025-02-13T15:25:53.736523167Z" level=info msg="StopPodSandbox for 
\"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" returns successfully" Feb 13 15:25:53.736563 containerd[1605]: time="2025-02-13T15:25:53.736523468Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:25:53.736563 containerd[1605]: time="2025-02-13T15:25:53.736543996Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:25:53.736806 systemd[1]: run-netns-cni\x2d4480517a\x2d7b45\x2d998a\x2dce65\x2dd43eb378beed.mount: Deactivated successfully. Feb 13 15:25:53.738986 containerd[1605]: time="2025-02-13T15:25:53.738757362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:25:53.738986 containerd[1605]: time="2025-02-13T15:25:53.738827475Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:25:53.738986 containerd[1605]: time="2025-02-13T15:25:53.738898788Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:25:53.738986 containerd[1605]: time="2025-02-13T15:25:53.738907264Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:25:53.739245 containerd[1605]: time="2025-02-13T15:25:53.739225762Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:25:53.739370 containerd[1605]: time="2025-02-13T15:25:53.739290474Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:25:53.739370 containerd[1605]: time="2025-02-13T15:25:53.739298278Z" level=info msg="StopPodSandbox for 
\"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:25:53.739789 containerd[1605]: time="2025-02-13T15:25:53.739745017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:4,}" Feb 13 15:25:53.739901 kubelet[2845]: I0213 15:25:53.739882 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a" Feb 13 15:25:53.741694 systemd[1]: run-netns-cni\x2d9c8ebf46\x2db75c\x2dd6d5\x2d73ed\x2d0922d9b38bd6.mount: Deactivated successfully. Feb 13 15:25:53.742947 containerd[1605]: time="2025-02-13T15:25:53.742914189Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\"" Feb 13 15:25:53.743242 containerd[1605]: time="2025-02-13T15:25:53.743209924Z" level=info msg="Ensure that sandbox ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a in task-service has been cleanup successfully" Feb 13 15:25:53.743692 containerd[1605]: time="2025-02-13T15:25:53.743556515Z" level=info msg="TearDown network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" successfully" Feb 13 15:25:53.743692 containerd[1605]: time="2025-02-13T15:25:53.743611909Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" returns successfully" Feb 13 15:25:53.744721 containerd[1605]: time="2025-02-13T15:25:53.744436788Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" Feb 13 15:25:53.744721 containerd[1605]: time="2025-02-13T15:25:53.744507541Z" level=info msg="TearDown network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" successfully" Feb 13 15:25:53.744721 containerd[1605]: time="2025-02-13T15:25:53.744525264Z" level=info 
msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" returns successfully" Feb 13 15:25:53.745160 containerd[1605]: time="2025-02-13T15:25:53.745127054Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:25:53.745552 containerd[1605]: time="2025-02-13T15:25:53.745537726Z" level=info msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully" Feb 13 15:25:53.745658 containerd[1605]: time="2025-02-13T15:25:53.745628265Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully" Feb 13 15:25:53.746089 kubelet[2845]: I0213 15:25:53.745980 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f" Feb 13 15:25:53.746813 systemd[1]: run-netns-cni\x2d081f057a\x2d513b\x2d1a5e\x2db8ed\x2d7583c62927f0.mount: Deactivated successfully. 
Feb 13 15:25:53.752062 containerd[1605]: time="2025-02-13T15:25:53.752038052Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\"" Feb 13 15:25:53.753813 containerd[1605]: time="2025-02-13T15:25:53.752175390Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:25:53.753813 containerd[1605]: time="2025-02-13T15:25:53.752230143Z" level=info msg="Ensure that sandbox 654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f in task-service has been cleanup successfully" Feb 13 15:25:53.753813 containerd[1605]: time="2025-02-13T15:25:53.752270759Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:25:53.753813 containerd[1605]: time="2025-02-13T15:25:53.752284375Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully" Feb 13 15:25:53.753813 containerd[1605]: time="2025-02-13T15:25:53.752382980Z" level=info msg="TearDown network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" successfully" Feb 13 15:25:53.753813 containerd[1605]: time="2025-02-13T15:25:53.752392949Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" returns successfully" Feb 13 15:25:53.755125 containerd[1605]: time="2025-02-13T15:25:53.754390309Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" Feb 13 15:25:53.755125 containerd[1605]: time="2025-02-13T15:25:53.754462775Z" level=info msg="TearDown network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" successfully" Feb 13 15:25:53.755125 containerd[1605]: time="2025-02-13T15:25:53.754476521Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" 
returns successfully" Feb 13 15:25:53.755125 containerd[1605]: time="2025-02-13T15:25:53.754536383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:4,}" Feb 13 15:25:53.755223 kubelet[2845]: E0213 15:25:53.754160 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:53.756633 containerd[1605]: time="2025-02-13T15:25:53.756436532Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:25:53.756633 containerd[1605]: time="2025-02-13T15:25:53.756505000Z" level=info msg="TearDown network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully" Feb 13 15:25:53.756633 containerd[1605]: time="2025-02-13T15:25:53.756513035Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully" Feb 13 15:25:53.756633 systemd[1]: run-netns-cni\x2dcc44a3ab\x2d9dd6\x2d9b15\x2dba82\x2d030d9e27ff65.mount: Deactivated successfully. 
Feb 13 15:25:53.756816 containerd[1605]: time="2025-02-13T15:25:53.756786649Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:25:53.757521 containerd[1605]: time="2025-02-13T15:25:53.757109967Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:25:53.757521 containerd[1605]: time="2025-02-13T15:25:53.757128832Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:25:53.757679 kubelet[2845]: I0213 15:25:53.757650 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96" Feb 13 15:25:53.758071 containerd[1605]: time="2025-02-13T15:25:53.758043670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:4,}" Feb 13 15:25:53.758364 containerd[1605]: time="2025-02-13T15:25:53.758342180Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\"" Feb 13 15:25:53.758552 containerd[1605]: time="2025-02-13T15:25:53.758531176Z" level=info msg="Ensure that sandbox b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96 in task-service has been cleanup successfully" Feb 13 15:25:53.758714 containerd[1605]: time="2025-02-13T15:25:53.758697287Z" level=info msg="TearDown network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" successfully" Feb 13 15:25:53.758714 containerd[1605]: time="2025-02-13T15:25:53.758712366Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" returns successfully" Feb 13 15:25:53.759755 containerd[1605]: time="2025-02-13T15:25:53.759725849Z" level=info msg="StopPodSandbox for 
\"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" Feb 13 15:25:53.759856 containerd[1605]: time="2025-02-13T15:25:53.759832830Z" level=info msg="TearDown network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" successfully" Feb 13 15:25:53.759931 containerd[1605]: time="2025-02-13T15:25:53.759855022Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" returns successfully" Feb 13 15:25:53.760242 containerd[1605]: time="2025-02-13T15:25:53.760120410Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:25:53.760242 containerd[1605]: time="2025-02-13T15:25:53.760231499Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully" Feb 13 15:25:53.760335 containerd[1605]: time="2025-02-13T15:25:53.760243772Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully" Feb 13 15:25:53.760650 containerd[1605]: time="2025-02-13T15:25:53.760490866Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:25:53.760650 containerd[1605]: time="2025-02-13T15:25:53.760583760Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:25:53.760650 containerd[1605]: time="2025-02-13T15:25:53.760593238Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:25:53.761596 containerd[1605]: time="2025-02-13T15:25:53.761571555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:25:53.768338 kubelet[2845]: I0213 15:25:53.768267 2845 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953" Feb 13 15:25:53.769539 containerd[1605]: time="2025-02-13T15:25:53.769412760Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" Feb 13 15:25:53.769657 containerd[1605]: time="2025-02-13T15:25:53.769637182Z" level=info msg="Ensure that sandbox 52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953 in task-service has been cleanup successfully" Feb 13 15:25:53.769860 containerd[1605]: time="2025-02-13T15:25:53.769835043Z" level=info msg="TearDown network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" successfully" Feb 13 15:25:53.769860 containerd[1605]: time="2025-02-13T15:25:53.769851644Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" returns successfully" Feb 13 15:25:53.770183 containerd[1605]: time="2025-02-13T15:25:53.770162568Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:25:53.970303 containerd[1605]: time="2025-02-13T15:25:53.770247086Z" level=info msg="TearDown network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" successfully" Feb 13 15:25:53.970462 containerd[1605]: time="2025-02-13T15:25:53.970297090Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" returns successfully" Feb 13 15:25:53.971136 containerd[1605]: time="2025-02-13T15:25:53.971116007Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:25:53.983725 containerd[1605]: time="2025-02-13T15:25:53.971482967Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:25:53.983846 
containerd[1605]: time="2025-02-13T15:25:53.983821216Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:25:53.984389 containerd[1605]: time="2025-02-13T15:25:53.984218632Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:25:53.984389 containerd[1605]: time="2025-02-13T15:25:53.984311947Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:25:53.984389 containerd[1605]: time="2025-02-13T15:25:53.984331515Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:25:53.984656 kubelet[2845]: E0213 15:25:53.984634 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:53.985190 containerd[1605]: time="2025-02-13T15:25:53.984980704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:4,}" Feb 13 15:25:54.097161 containerd[1605]: time="2025-02-13T15:25:54.096376568Z" level=error msg="Failed to destroy network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.097429 containerd[1605]: time="2025-02-13T15:25:54.097320150Z" level=error msg="encountered an error cleaning up failed sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.097429 containerd[1605]: time="2025-02-13T15:25:54.097372838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.099498 kubelet[2845]: E0213 15:25:54.097658 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.099498 kubelet[2845]: E0213 15:25:54.097717 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:54.099498 kubelet[2845]: E0213 15:25:54.097738 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" Feb 13 15:25:54.099677 kubelet[2845]: E0213 15:25:54.097787 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9" Feb 13 15:25:54.118429 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:36382.service - OpenSSH per-connection server daemon (10.0.0.1:36382). Feb 13 15:25:54.205082 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 36382 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:25:54.208794 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:54.214035 systemd[1]: run-netns-cni\x2df1678320\x2de405\x2de029\x2dce12\x2dee4a7850f1fd.mount: Deactivated successfully. Feb 13 15:25:54.214252 systemd[1]: run-netns-cni\x2d52f3d317\x2dcdac\x2d275d\x2da70d\x2d39ea4b5600d5.mount: Deactivated successfully. Feb 13 15:25:54.225800 systemd-logind[1588]: New session 8 of user core. Feb 13 15:25:54.230406 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 15:25:54.251541 containerd[1605]: time="2025-02-13T15:25:54.251412816Z" level=error msg="Failed to destroy network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.324424 containerd[1605]: time="2025-02-13T15:25:54.252621405Z" level=error msg="encountered an error cleaning up failed sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.324424 containerd[1605]: time="2025-02-13T15:25:54.252713568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.255931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d-shm.mount: Deactivated successfully. 
Feb 13 15:25:54.324839 kubelet[2845]: E0213 15:25:54.253024 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.324839 kubelet[2845]: E0213 15:25:54.253091 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:54.324839 kubelet[2845]: E0213 15:25:54.253120 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" Feb 13 15:25:54.325013 kubelet[2845]: E0213 15:25:54.253208 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podUID="986d0026-8722-498a-9efe-05e1624276c3" Feb 13 15:25:54.566479 sshd[4589]: Connection closed by 10.0.0.1 port 36382 Feb 13 15:25:54.566820 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:54.570560 systemd-logind[1588]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:25:54.573252 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:36382.service: Deactivated successfully. Feb 13 15:25:54.576945 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:25:54.579117 systemd-logind[1588]: Removed session 8. Feb 13 15:25:54.619280 containerd[1605]: time="2025-02-13T15:25:54.619234487Z" level=error msg="Failed to destroy network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.620154 containerd[1605]: time="2025-02-13T15:25:54.619978434Z" level=error msg="encountered an error cleaning up failed sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.620154 containerd[1605]: time="2025-02-13T15:25:54.620057532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network 
for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.621697 containerd[1605]: time="2025-02-13T15:25:54.621660513Z" level=error msg="Failed to destroy network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.622854 kubelet[2845]: E0213 15:25:54.622339 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.622854 kubelet[2845]: E0213 15:25:54.622400 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:54.622854 kubelet[2845]: E0213 15:25:54.622421 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff" Feb 13 15:25:54.622995 containerd[1605]: time="2025-02-13T15:25:54.622342063Z" level=error msg="encountered an error cleaning up failed sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.622995 containerd[1605]: time="2025-02-13T15:25:54.622526148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.623083 kubelet[2845]: E0213 15:25:54.622486 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cc6ff" podUID="89340465-825b-41b6-aad8-3f7e90dab570" Feb 13 15:25:54.623083 kubelet[2845]: E0213 15:25:54.622911 2845 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.623083 kubelet[2845]: E0213 15:25:54.622973 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:54.623218 kubelet[2845]: E0213 15:25:54.622993 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" Feb 13 15:25:54.623218 kubelet[2845]: E0213 15:25:54.623044 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c" Feb 13 15:25:54.637371 containerd[1605]: time="2025-02-13T15:25:54.636756329Z" level=error msg="Failed to destroy network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.637371 containerd[1605]: time="2025-02-13T15:25:54.637219839Z" level=error msg="encountered an error cleaning up failed sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.637371 containerd[1605]: time="2025-02-13T15:25:54.637269973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.637548 kubelet[2845]: E0213 15:25:54.637488 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 15:25:54.637548 kubelet[2845]: E0213 15:25:54.637536 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:54.637633 kubelet[2845]: E0213 15:25:54.637563 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf" Feb 13 15:25:54.637633 kubelet[2845]: E0213 15:25:54.637613 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57" Feb 13 15:25:54.639279 containerd[1605]: time="2025-02-13T15:25:54.639232258Z" level=error msg="Failed to destroy network for sandbox 
\"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.639671 containerd[1605]: time="2025-02-13T15:25:54.639635094Z" level=error msg="encountered an error cleaning up failed sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.639726 containerd[1605]: time="2025-02-13T15:25:54.639703292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.639892 kubelet[2845]: E0213 15:25:54.639870 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:25:54.639961 kubelet[2845]: E0213 15:25:54.639911 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:54.639961 kubelet[2845]: E0213 15:25:54.639929 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m" Feb 13 15:25:54.640012 kubelet[2845]: E0213 15:25:54.639971 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-b899m" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656" Feb 13 15:25:54.771721 kubelet[2845]: I0213 15:25:54.771688 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375" Feb 13 15:25:54.772757 containerd[1605]: time="2025-02-13T15:25:54.772512694Z" level=info msg="StopPodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\"" Feb 13 15:25:54.772757 containerd[1605]: time="2025-02-13T15:25:54.772749528Z" level=info msg="Ensure that sandbox 
379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375 in task-service has been cleanup successfully" Feb 13 15:25:54.773386 containerd[1605]: time="2025-02-13T15:25:54.772957038Z" level=info msg="TearDown network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" successfully" Feb 13 15:25:54.773386 containerd[1605]: time="2025-02-13T15:25:54.772974040Z" level=info msg="StopPodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" returns successfully" Feb 13 15:25:54.773444 containerd[1605]: time="2025-02-13T15:25:54.773422432Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\"" Feb 13 15:25:54.773530 containerd[1605]: time="2025-02-13T15:25:54.773507842Z" level=info msg="TearDown network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" successfully" Feb 13 15:25:54.773561 containerd[1605]: time="2025-02-13T15:25:54.773527469Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" returns successfully" Feb 13 15:25:54.773840 containerd[1605]: time="2025-02-13T15:25:54.773807254Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" Feb 13 15:25:54.773967 containerd[1605]: time="2025-02-13T15:25:54.773933691Z" level=info msg="TearDown network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" successfully" Feb 13 15:25:54.773967 containerd[1605]: time="2025-02-13T15:25:54.773946656Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" returns successfully" Feb 13 15:25:54.774371 containerd[1605]: time="2025-02-13T15:25:54.774213036Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:25:54.774371 containerd[1605]: time="2025-02-13T15:25:54.774299048Z" level=info 
msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully" Feb 13 15:25:54.774371 containerd[1605]: time="2025-02-13T15:25:54.774309638Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully" Feb 13 15:25:54.774745 containerd[1605]: time="2025-02-13T15:25:54.774600354Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:25:54.774745 containerd[1605]: time="2025-02-13T15:25:54.774680544Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:25:54.774745 containerd[1605]: time="2025-02-13T15:25:54.774688800Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully" Feb 13 15:25:54.774933 kubelet[2845]: E0213 15:25:54.774912 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:54.775230 containerd[1605]: time="2025-02-13T15:25:54.775205540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:5,}" Feb 13 15:25:54.775283 kubelet[2845]: I0213 15:25:54.775224 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc" Feb 13 15:25:54.775788 containerd[1605]: time="2025-02-13T15:25:54.775757606Z" level=info msg="StopPodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\"" Feb 13 15:25:54.775938 containerd[1605]: time="2025-02-13T15:25:54.775923127Z" level=info msg="Ensure that sandbox a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc in task-service has been 
cleanup successfully" Feb 13 15:25:54.776093 containerd[1605]: time="2025-02-13T15:25:54.776069863Z" level=info msg="TearDown network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" successfully" Feb 13 15:25:54.776093 containerd[1605]: time="2025-02-13T15:25:54.776084911Z" level=info msg="StopPodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" returns successfully" Feb 13 15:25:54.776506 containerd[1605]: time="2025-02-13T15:25:54.776484261Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\"" Feb 13 15:25:54.776577 containerd[1605]: time="2025-02-13T15:25:54.776556176Z" level=info msg="TearDown network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" successfully" Feb 13 15:25:54.776577 containerd[1605]: time="2025-02-13T15:25:54.776565383Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" returns successfully" Feb 13 15:25:54.776977 containerd[1605]: time="2025-02-13T15:25:54.776873652Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" Feb 13 15:25:54.776977 containerd[1605]: time="2025-02-13T15:25:54.776949975Z" level=info msg="TearDown network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" successfully" Feb 13 15:25:54.776977 containerd[1605]: time="2025-02-13T15:25:54.776958321Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" returns successfully" Feb 13 15:25:54.777475 containerd[1605]: time="2025-02-13T15:25:54.777295484Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:25:54.777812 containerd[1605]: time="2025-02-13T15:25:54.777366878Z" level=info msg="TearDown network for sandbox 
\"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully" Feb 13 15:25:54.777857 containerd[1605]: time="2025-02-13T15:25:54.777805271Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully" Feb 13 15:25:54.778263 containerd[1605]: time="2025-02-13T15:25:54.778240057Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:25:54.778321 containerd[1605]: time="2025-02-13T15:25:54.778309077Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:25:54.778353 containerd[1605]: time="2025-02-13T15:25:54.778317783Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:25:54.778954 containerd[1605]: time="2025-02-13T15:25:54.778934872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:5,}" Feb 13 15:25:54.779031 kubelet[2845]: I0213 15:25:54.779002 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284" Feb 13 15:25:54.780306 containerd[1605]: time="2025-02-13T15:25:54.780256534Z" level=info msg="StopPodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\"" Feb 13 15:25:54.780502 containerd[1605]: time="2025-02-13T15:25:54.780447733Z" level=info msg="Ensure that sandbox 6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284 in task-service has been cleanup successfully" Feb 13 15:25:54.780689 containerd[1605]: time="2025-02-13T15:25:54.780666824Z" level=info msg="TearDown network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" successfully" Feb 13 15:25:54.780746 containerd[1605]: 
time="2025-02-13T15:25:54.780725705Z" level=info msg="StopPodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" returns successfully" Feb 13 15:25:54.782089 containerd[1605]: time="2025-02-13T15:25:54.781526779Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" Feb 13 15:25:54.782089 containerd[1605]: time="2025-02-13T15:25:54.781615926Z" level=info msg="TearDown network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" successfully" Feb 13 15:25:54.782089 containerd[1605]: time="2025-02-13T15:25:54.781630063Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" returns successfully" Feb 13 15:25:54.782089 containerd[1605]: time="2025-02-13T15:25:54.781942029Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:25:54.782089 containerd[1605]: time="2025-02-13T15:25:54.782024203Z" level=info msg="TearDown network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" successfully" Feb 13 15:25:54.782089 containerd[1605]: time="2025-02-13T15:25:54.782037668Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" returns successfully" Feb 13 15:25:54.784036 containerd[1605]: time="2025-02-13T15:25:54.782494746Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:25:54.784036 containerd[1605]: time="2025-02-13T15:25:54.782605325Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:25:54.784036 containerd[1605]: time="2025-02-13T15:25:54.782618970Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:25:54.784036 containerd[1605]: 
time="2025-02-13T15:25:54.782806833Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:25:54.784036 containerd[1605]: time="2025-02-13T15:25:54.782882465Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:25:54.784036 containerd[1605]: time="2025-02-13T15:25:54.782891982Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:25:54.784036 containerd[1605]: time="2025-02-13T15:25:54.783352778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:5,}" Feb 13 15:25:54.784216 kubelet[2845]: E0213 15:25:54.783096 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:54.784216 kubelet[2845]: I0213 15:25:54.783552 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c" Feb 13 15:25:54.784308 containerd[1605]: time="2025-02-13T15:25:54.784093899Z" level=info msg="StopPodSandbox for \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\"" Feb 13 15:25:54.784308 containerd[1605]: time="2025-02-13T15:25:54.784263639Z" level=info msg="Ensure that sandbox 0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c in task-service has been cleanup successfully" Feb 13 15:25:54.784455 containerd[1605]: time="2025-02-13T15:25:54.784437926Z" level=info msg="TearDown network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" successfully" Feb 13 15:25:54.784579 containerd[1605]: time="2025-02-13T15:25:54.784452403Z" level=info msg="StopPodSandbox for 
\"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" returns successfully" Feb 13 15:25:54.784763 containerd[1605]: time="2025-02-13T15:25:54.784735976Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" Feb 13 15:25:54.784935 containerd[1605]: time="2025-02-13T15:25:54.784819031Z" level=info msg="TearDown network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" successfully" Feb 13 15:25:54.784935 containerd[1605]: time="2025-02-13T15:25:54.784829000Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" returns successfully" Feb 13 15:25:54.785260 containerd[1605]: time="2025-02-13T15:25:54.785129585Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:25:54.785386 containerd[1605]: time="2025-02-13T15:25:54.785332545Z" level=info msg="TearDown network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" successfully" Feb 13 15:25:54.785386 containerd[1605]: time="2025-02-13T15:25:54.785347794Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" returns successfully" Feb 13 15:25:54.785536 containerd[1605]: time="2025-02-13T15:25:54.785516581Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:25:54.785627 containerd[1605]: time="2025-02-13T15:25:54.785591422Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:25:54.785655 containerd[1605]: time="2025-02-13T15:25:54.785627049Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:25:54.785885 kubelet[2845]: I0213 15:25:54.785849 2845 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e" Feb 13 15:25:54.785995 containerd[1605]: time="2025-02-13T15:25:54.785964462Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:25:54.786080 containerd[1605]: time="2025-02-13T15:25:54.786060272Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:25:54.786080 containerd[1605]: time="2025-02-13T15:25:54.786077434Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:25:54.786261 containerd[1605]: time="2025-02-13T15:25:54.786240941Z" level=info msg="StopPodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\"" Feb 13 15:25:54.786436 containerd[1605]: time="2025-02-13T15:25:54.786377588Z" level=info msg="Ensure that sandbox 7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e in task-service has been cleanup successfully" Feb 13 15:25:54.786473 containerd[1605]: time="2025-02-13T15:25:54.786434515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:25:54.786688 containerd[1605]: time="2025-02-13T15:25:54.786664847Z" level=info msg="TearDown network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" successfully" Feb 13 15:25:54.786688 containerd[1605]: time="2025-02-13T15:25:54.786682781Z" level=info msg="StopPodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" returns successfully" Feb 13 15:25:54.786961 containerd[1605]: time="2025-02-13T15:25:54.786856217Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\"" Feb 13 15:25:54.786961 containerd[1605]: 
time="2025-02-13T15:25:54.786937159Z" level=info msg="TearDown network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" successfully" Feb 13 15:25:54.786961 containerd[1605]: time="2025-02-13T15:25:54.786947608Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" returns successfully" Feb 13 15:25:54.787393 containerd[1605]: time="2025-02-13T15:25:54.787365733Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" Feb 13 15:25:54.787473 containerd[1605]: time="2025-02-13T15:25:54.787445012Z" level=info msg="TearDown network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" successfully" Feb 13 15:25:54.787473 containerd[1605]: time="2025-02-13T15:25:54.787459910Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" returns successfully" Feb 13 15:25:54.787740 containerd[1605]: time="2025-02-13T15:25:54.787715130Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:25:54.787812 containerd[1605]: time="2025-02-13T15:25:54.787794378Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully" Feb 13 15:25:54.787812 containerd[1605]: time="2025-02-13T15:25:54.787807613Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully" Feb 13 15:25:54.787985 kubelet[2845]: I0213 15:25:54.787968 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d" Feb 13 15:25:54.788636 containerd[1605]: time="2025-02-13T15:25:54.788303043Z" level=info msg="StopPodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\"" Feb 13 15:25:54.788636 
containerd[1605]: time="2025-02-13T15:25:54.788306590Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:25:54.788636 containerd[1605]: time="2025-02-13T15:25:54.788426424Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:25:54.788636 containerd[1605]: time="2025-02-13T15:25:54.788437226Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:25:54.788636 containerd[1605]: time="2025-02-13T15:25:54.788449609Z" level=info msg="Ensure that sandbox 8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d in task-service has been cleanup successfully" Feb 13 15:25:54.788636 containerd[1605]: time="2025-02-13T15:25:54.788596254Z" level=info msg="TearDown network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" successfully" Feb 13 15:25:54.788636 containerd[1605]: time="2025-02-13T15:25:54.788606663Z" level=info msg="StopPodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" returns successfully" Feb 13 15:25:54.791326 containerd[1605]: time="2025-02-13T15:25:54.791280043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:25:54.791326 containerd[1605]: time="2025-02-13T15:25:54.791311692Z" level=info msg="StopPodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" Feb 13 15:25:54.791426 containerd[1605]: time="2025-02-13T15:25:54.791396763Z" level=info msg="TearDown network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" successfully" Feb 13 15:25:54.791426 containerd[1605]: time="2025-02-13T15:25:54.791405740Z" level=info msg="StopPodSandbox for 
\"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" returns successfully" Feb 13 15:25:54.792645 containerd[1605]: time="2025-02-13T15:25:54.792601745Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:25:54.792730 containerd[1605]: time="2025-02-13T15:25:54.792709868Z" level=info msg="TearDown network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" successfully" Feb 13 15:25:54.792730 containerd[1605]: time="2025-02-13T15:25:54.792727351Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" returns successfully" Feb 13 15:25:54.793477 containerd[1605]: time="2025-02-13T15:25:54.793375127Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:25:54.793477 containerd[1605]: time="2025-02-13T15:25:54.793448976Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:25:54.793477 containerd[1605]: time="2025-02-13T15:25:54.793457542Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:25:54.793757 containerd[1605]: time="2025-02-13T15:25:54.793740153Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:25:54.793839 containerd[1605]: time="2025-02-13T15:25:54.793823238Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:25:54.793873 containerd[1605]: time="2025-02-13T15:25:54.793839469Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:25:54.794192 containerd[1605]: time="2025-02-13T15:25:54.794175901Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:5,}"
Feb 13 15:25:54.926451 containerd[1605]: time="2025-02-13T15:25:54.926392448Z" level=error msg="Failed to destroy network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.941575 containerd[1605]: time="2025-02-13T15:25:54.941290644Z" level=error msg="Failed to destroy network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.941859 containerd[1605]: time="2025-02-13T15:25:54.941827301Z" level=error msg="encountered an error cleaning up failed sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.941935 containerd[1605]: time="2025-02-13T15:25:54.941909095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.943251 kubelet[2845]: E0213 15:25:54.942223 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.943251 kubelet[2845]: E0213 15:25:54.942286 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff"
Feb 13 15:25:54.943251 kubelet[2845]: E0213 15:25:54.942308 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cc6ff"
Feb 13 15:25:54.943437 kubelet[2845]: E0213 15:25:54.942370 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cc6ff_kube-system(89340465-825b-41b6-aad8-3f7e90dab570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cc6ff" podUID="89340465-825b-41b6-aad8-3f7e90dab570"
Feb 13 15:25:54.946716 containerd[1605]: time="2025-02-13T15:25:54.946670336Z" level=error msg="encountered an error cleaning up failed sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.947587 containerd[1605]: time="2025-02-13T15:25:54.946743042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.947649 kubelet[2845]: E0213 15:25:54.947025 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:54.947649 kubelet[2845]: E0213 15:25:54.947078 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m"
Feb 13 15:25:54.947649 kubelet[2845]: E0213 15:25:54.947105 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-b899m"
Feb 13 15:25:54.947794 kubelet[2845]: E0213 15:25:54.947197 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b899m_kube-system(6773e7e4-f620-4ef0-9bbb-a553c14f7656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-b899m" podUID="6773e7e4-f620-4ef0-9bbb-a553c14f7656"
Feb 13 15:25:55.007696 containerd[1605]: time="2025-02-13T15:25:55.007379089Z" level=error msg="Failed to destroy network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.009516 containerd[1605]: time="2025-02-13T15:25:55.009492437Z" level=error msg="Failed to destroy network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.023560 containerd[1605]: time="2025-02-13T15:25:55.023492243Z" level=error msg="Failed to destroy network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.024189 containerd[1605]: time="2025-02-13T15:25:55.024128067Z" level=error msg="encountered an error cleaning up failed sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.024280 containerd[1605]: time="2025-02-13T15:25:55.024243804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.024598 kubelet[2845]: E0213 15:25:55.024545 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.024598 kubelet[2845]: E0213 15:25:55.024603 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt"
Feb 13 15:25:55.024815 kubelet[2845]: E0213 15:25:55.024627 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt"
Feb 13 15:25:55.024815 kubelet[2845]: E0213 15:25:55.024683 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-p7rtt_calico-apiserver(77fafed8-e30a-4324-8899-dc18d6f5bcb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podUID="77fafed8-e30a-4324-8899-dc18d6f5bcb9"
Feb 13 15:25:55.029729 containerd[1605]: time="2025-02-13T15:25:55.029665464Z" level=error msg="Failed to destroy network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.174960 containerd[1605]: time="2025-02-13T15:25:55.174339682Z" level=error msg="encountered an error cleaning up failed sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.174960 containerd[1605]: time="2025-02-13T15:25:55.174459898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.174960 containerd[1605]: time="2025-02-13T15:25:55.174654854Z" level=error msg="encountered an error cleaning up failed sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.174960 containerd[1605]: time="2025-02-13T15:25:55.174692785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.174960 containerd[1605]: time="2025-02-13T15:25:55.174808111Z" level=error msg="encountered an error cleaning up failed sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.174960 containerd[1605]: time="2025-02-13T15:25:55.174840693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.175409 kubelet[2845]: E0213 15:25:55.175029 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.175409 kubelet[2845]: E0213 15:25:55.175079 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk"
Feb 13 15:25:55.175409 kubelet[2845]: E0213 15:25:55.175099 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk"
Feb 13 15:25:55.175529 kubelet[2845]: E0213 15:25:55.175161 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cfc58b57b-pxnbk_calico-apiserver(84ad88b0-6adb-4b60-9716-75d388d2367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podUID="84ad88b0-6adb-4b60-9716-75d388d2367c"
Feb 13 15:25:55.175529 kubelet[2845]: E0213 15:25:55.175332 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.175529 kubelet[2845]: E0213 15:25:55.175350 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf"
Feb 13 15:25:55.175647 kubelet[2845]: E0213 15:25:55.175366 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hctxf"
Feb 13 15:25:55.175647 kubelet[2845]: E0213 15:25:55.175400 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hctxf_calico-system(f4c1f8e5-b232-41b8-a095-6576678dbe57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hctxf" podUID="f4c1f8e5-b232-41b8-a095-6576678dbe57"
Feb 13 15:25:55.175647 kubelet[2845]: E0213 15:25:55.175432 2845 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:25:55.175776 kubelet[2845]: E0213 15:25:55.175486 2845 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp"
Feb 13 15:25:55.175776 kubelet[2845]: E0213 15:25:55.175507 2845 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp"
Feb 13 15:25:55.175776 kubelet[2845]: E0213 15:25:55.175545 2845 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b8d49698-qwwkp_calico-system(986d0026-8722-498a-9efe-05e1624276c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podUID="986d0026-8722-498a-9efe-05e1624276c3"
Feb 13 15:25:55.213764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284-shm.mount: Deactivated successfully.
Feb 13 15:25:55.214012 systemd[1]: run-netns-cni\x2df3455cdb\x2d52ed\x2d0b64\x2d1dc8\x2d6871876d1441.mount: Deactivated successfully.
Feb 13 15:25:55.214196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375-shm.mount: Deactivated successfully.
Feb 13 15:25:55.214376 systemd[1]: run-netns-cni\x2d225cdaa6\x2d6aab\x2d4e37\x2d71ab\x2dea86cec4d844.mount: Deactivated successfully.
Feb 13 15:25:55.214543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc-shm.mount: Deactivated successfully.
Feb 13 15:25:55.214721 systemd[1]: run-netns-cni\x2df24f95a6\x2dd25c\x2d4286\x2d5809\x2dbfa4e564e5f1.mount: Deactivated successfully.
Feb 13 15:25:55.214909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e-shm.mount: Deactivated successfully.
Feb 13 15:25:55.215092 systemd[1]: run-netns-cni\x2db9d82b5b\x2d25ce\x2dad69\x2d9918\x2d536438895b52.mount: Deactivated successfully.
Feb 13 15:25:55.215279 systemd[1]: run-netns-cni\x2d42c6a82e\x2d0442\x2d82c8\x2d779c\x2d86831b82a93c.mount: Deactivated successfully.
Feb 13 15:25:55.258475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686650134.mount: Deactivated successfully.
Feb 13 15:25:55.295412 containerd[1605]: time="2025-02-13T15:25:55.295365008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:55.296168 containerd[1605]: time="2025-02-13T15:25:55.296096501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 15:25:55.300245 containerd[1605]: time="2025-02-13T15:25:55.300212029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.692119653s"
Feb 13 15:25:55.300300 containerd[1605]: time="2025-02-13T15:25:55.300245402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 15:25:55.307767 containerd[1605]: time="2025-02-13T15:25:55.307703003Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:55.309308 containerd[1605]: time="2025-02-13T15:25:55.308771019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:55.312176 containerd[1605]: time="2025-02-13T15:25:55.312155473Z" level=info msg="CreateContainer within sandbox \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:25:55.339821 containerd[1605]: time="2025-02-13T15:25:55.339771645Z" level=info msg="CreateContainer within sandbox \"c4c9ffa1af486510e06fbaa2f32124e5b8e45a567179b6b571573d5da4ca556d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d63a9b1fcb157b50ba46ffb5e3a1c36e227494094a643e5631e51ba63d256a5d\""
Feb 13 15:25:55.341154 containerd[1605]: time="2025-02-13T15:25:55.341122562Z" level=info msg="StartContainer for \"d63a9b1fcb157b50ba46ffb5e3a1c36e227494094a643e5631e51ba63d256a5d\""
Feb 13 15:25:55.668990 containerd[1605]: time="2025-02-13T15:25:55.668597684Z" level=info msg="StartContainer for \"d63a9b1fcb157b50ba46ffb5e3a1c36e227494094a643e5631e51ba63d256a5d\" returns successfully"
Feb 13 15:25:55.677319 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 15:25:55.677428 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Feb 13 15:25:55.793271 kubelet[2845]: I0213 15:25:55.793241 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4"
Feb 13 15:25:55.794025 containerd[1605]: time="2025-02-13T15:25:55.793982926Z" level=info msg="StopPodSandbox for \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\""
Feb 13 15:25:55.794311 containerd[1605]: time="2025-02-13T15:25:55.794207177Z" level=info msg="Ensure that sandbox ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4 in task-service has been cleanup successfully"
Feb 13 15:25:55.794552 containerd[1605]: time="2025-02-13T15:25:55.794512310Z" level=info msg="TearDown network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\" successfully"
Feb 13 15:25:55.794552 containerd[1605]: time="2025-02-13T15:25:55.794540153Z" level=info msg="StopPodSandbox for \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\" returns successfully"
Feb 13 15:25:55.795090 containerd[1605]: time="2025-02-13T15:25:55.795057504Z" level=info msg="StopPodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\""
Feb 13 15:25:55.795175 containerd[1605]: time="2025-02-13T15:25:55.795154335Z" level=info msg="TearDown network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" successfully"
Feb 13 15:25:55.795175 containerd[1605]: time="2025-02-13T15:25:55.795170957Z" level=info msg="StopPodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" returns successfully"
Feb 13 15:25:55.795491 containerd[1605]: time="2025-02-13T15:25:55.795460260Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\""
Feb 13 15:25:55.795551 containerd[1605]: time="2025-02-13T15:25:55.795534790Z" level=info msg="TearDown network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" successfully"
Feb 13 15:25:55.795551 containerd[1605]: time="2025-02-13T15:25:55.795548235Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" returns successfully"
Feb 13 15:25:55.795799 containerd[1605]: time="2025-02-13T15:25:55.795772185Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\""
Feb 13 15:25:55.795891 containerd[1605]: time="2025-02-13T15:25:55.795860432Z" level=info msg="TearDown network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" successfully"
Feb 13 15:25:55.795941 containerd[1605]: time="2025-02-13T15:25:55.795889055Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" returns successfully"
Feb 13 15:25:55.796301 containerd[1605]: time="2025-02-13T15:25:55.796278215Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\""
Feb 13 15:25:55.796633 containerd[1605]: time="2025-02-13T15:25:55.796615348Z" level=info msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully"
Feb 13 15:25:55.796633 containerd[1605]: time="2025-02-13T15:25:55.796630587Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully"
Feb 13 15:25:55.797291 containerd[1605]: time="2025-02-13T15:25:55.797057949Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\""
Feb 13 15:25:55.797291 containerd[1605]: time="2025-02-13T15:25:55.797171453Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully"
Feb 13 15:25:55.797291 containerd[1605]: time="2025-02-13T15:25:55.797185079Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully"
Feb 13 15:25:55.797542 kubelet[2845]: E0213 15:25:55.797518 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:25:55.797933 containerd[1605]: time="2025-02-13T15:25:55.797897776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:6,}"
Feb 13 15:25:55.799087 kubelet[2845]: I0213 15:25:55.798755 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85"
Feb 13 15:25:55.800594 containerd[1605]: time="2025-02-13T15:25:55.800540609Z" level=info msg="StopPodSandbox for \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\""
Feb 13 15:25:55.800915 containerd[1605]: time="2025-02-13T15:25:55.800883893Z" level=info msg="Ensure that sandbox e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85 in task-service has been cleanup successfully"
Feb 13 15:25:55.801614 containerd[1605]: time="2025-02-13T15:25:55.801541818Z" level=info msg="TearDown network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\" successfully"
Feb 13 15:25:55.801614 containerd[1605]: time="2025-02-13T15:25:55.801569079Z" level=info msg="StopPodSandbox for \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\" returns successfully"
Feb 13 15:25:55.802397 containerd[1605]: time="2025-02-13T15:25:55.802369482Z" level=info msg="StopPodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\""
Feb 13 15:25:55.802544 containerd[1605]: time="2025-02-13T15:25:55.802461234Z" level=info msg="TearDown network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" successfully"
Feb 13 15:25:55.802544 containerd[1605]: time="2025-02-13T15:25:55.802516478Z" level=info msg="StopPodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" returns successfully"
Feb 13 15:25:55.803808 containerd[1605]: time="2025-02-13T15:25:55.803364200Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\""
Feb 13 15:25:55.804224 containerd[1605]: time="2025-02-13T15:25:55.804112505Z" level=info msg="TearDown network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" successfully"
Feb 13 15:25:55.804224 containerd[1605]: time="2025-02-13T15:25:55.804151819Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" returns successfully"
Feb 13 15:25:55.805110 containerd[1605]: time="2025-02-13T15:25:55.805079150Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\""
Feb 13 15:25:55.805219 containerd[1605]: time="2025-02-13T15:25:55.805173507Z" level=info msg="TearDown network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" successfully"
Feb 13 15:25:55.805219 containerd[1605]: time="2025-02-13T15:25:55.805187974Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" returns successfully"
Feb 13 15:25:55.805979 kubelet[2845]: E0213 15:25:55.805917 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:25:55.806458 containerd[1605]: time="2025-02-13T15:25:55.806041256Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\""
Feb 13 15:25:55.806458 containerd[1605]: time="2025-02-13T15:25:55.806182232Z" level=info msg="TearDown network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully"
Feb 13 15:25:55.806458 containerd[1605]: time="2025-02-13T15:25:55.806193903Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully"
Feb 13 15:25:55.806970 containerd[1605]: time="2025-02-13T15:25:55.806809008Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\""
Feb 13 15:25:55.807199 containerd[1605]: time="2025-02-13T15:25:55.807175857Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully"
Feb 13 15:25:55.807199 containerd[1605]: time="2025-02-13T15:25:55.807193600Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully"
Feb 13 15:25:55.807582 kubelet[2845]: I0213 15:25:55.807569 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e"
Feb 13 15:25:55.808324 containerd[1605]: time="2025-02-13T15:25:55.808073292Z" level=info msg="StopPodSandbox for \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\""
Feb 13 15:25:55.808482 containerd[1605]: time="2025-02-13T15:25:55.808176125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:6,}"
Feb 13 15:25:55.809838 containerd[1605]: time="2025-02-13T15:25:55.808949247Z" level=info msg="Ensure that sandbox f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e in task-service has been cleanup successfully"
Feb 13 15:25:55.809838 containerd[1605]: time="2025-02-13T15:25:55.809127451Z" level=info msg="TearDown network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\" successfully"
Feb 13 15:25:55.809838 containerd[1605]: time="2025-02-13T15:25:55.809150464Z" level=info msg="StopPodSandbox for \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\" returns successfully"
Feb 13 15:25:55.810470 containerd[1605]: time="2025-02-13T15:25:55.810439144Z" level=info msg="StopPodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\""
Feb 13 15:25:55.810663 containerd[1605]: time="2025-02-13T15:25:55.810631205Z" level=info msg="TearDown network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" successfully"
Feb 13 15:25:55.810663 containerd[1605]: time="2025-02-13T15:25:55.810650621Z" level=info msg="StopPodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" returns successfully"
Feb 13 15:25:55.811770 containerd[1605]: time="2025-02-13T15:25:55.811627365Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\""
Feb 13 15:25:55.811770 containerd[1605]: time="2025-02-13T15:25:55.811715410Z" level=info msg="TearDown network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" successfully"
Feb 13 15:25:55.811770 containerd[1605]: time="2025-02-13T15:25:55.811725910Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" returns successfully"
Feb 13 15:25:55.812628 containerd[1605]: time="2025-02-13T15:25:55.812595473Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\""
Feb 13 15:25:55.812693 containerd[1605]: time="2025-02-13T15:25:55.812675863Z" level=info msg="TearDown network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" successfully"
Feb 13 15:25:55.812693 containerd[1605]: time="2025-02-13T15:25:55.812688908Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" returns successfully"
Feb 13 15:25:55.813192 containerd[1605]: time="2025-02-13T15:25:55.813024849Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\""
Feb 13 15:25:55.813192 containerd[1605]: time="2025-02-13T15:25:55.813099990Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully"
Feb 13 15:25:55.813192 containerd[1605]: time="2025-02-13T15:25:55.813108736Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully"
Feb 13 15:25:55.814237 containerd[1605]: time="2025-02-13T15:25:55.814046657Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\""
Feb 13 15:25:55.814237 containerd[1605]: time="2025-02-13T15:25:55.814135784Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully"
Feb 13 15:25:55.814237 containerd[1605]: time="2025-02-13T15:25:55.814168326Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully"
Feb 13 15:25:55.814895 containerd[1605]:
time="2025-02-13T15:25:55.814701187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:25:55.815623 kubelet[2845]: I0213 15:25:55.815274 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839" Feb 13 15:25:55.815760 containerd[1605]: time="2025-02-13T15:25:55.815727533Z" level=info msg="StopPodSandbox for \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\"" Feb 13 15:25:55.816158 containerd[1605]: time="2025-02-13T15:25:55.816099041Z" level=info msg="Ensure that sandbox d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839 in task-service has been cleanup successfully" Feb 13 15:25:55.816445 containerd[1605]: time="2025-02-13T15:25:55.816418451Z" level=info msg="TearDown network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\" successfully" Feb 13 15:25:55.816445 containerd[1605]: time="2025-02-13T15:25:55.816441013Z" level=info msg="StopPodSandbox for \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\" returns successfully" Feb 13 15:25:55.817043 containerd[1605]: time="2025-02-13T15:25:55.816891209Z" level=info msg="StopPodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\"" Feb 13 15:25:55.817043 containerd[1605]: time="2025-02-13T15:25:55.816995214Z" level=info msg="TearDown network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" successfully" Feb 13 15:25:55.817043 containerd[1605]: time="2025-02-13T15:25:55.817004862Z" level=info msg="StopPodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" returns successfully" Feb 13 15:25:55.817855 containerd[1605]: time="2025-02-13T15:25:55.817698604Z" level=info msg="StopPodSandbox for 
\"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" Feb 13 15:25:55.817855 containerd[1605]: time="2025-02-13T15:25:55.817795887Z" level=info msg="TearDown network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" successfully" Feb 13 15:25:55.817855 containerd[1605]: time="2025-02-13T15:25:55.817809202Z" level=info msg="StopPodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" returns successfully" Feb 13 15:25:55.819249 kubelet[2845]: I0213 15:25:55.818414 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mp9n6" podStartSLOduration=2.262570926 podStartE2EDuration="18.818380865s" podCreationTimestamp="2025-02-13 15:25:37 +0000 UTC" firstStartedPulling="2025-02-13 15:25:38.744707112 +0000 UTC m=+25.354071294" lastFinishedPulling="2025-02-13 15:25:55.300517051 +0000 UTC m=+41.909881233" observedRunningTime="2025-02-13 15:25:55.817558441 +0000 UTC m=+42.426922623" watchObservedRunningTime="2025-02-13 15:25:55.818380865 +0000 UTC m=+42.427745047" Feb 13 15:25:55.819830 containerd[1605]: time="2025-02-13T15:25:55.819699141Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:25:55.819830 containerd[1605]: time="2025-02-13T15:25:55.819782497Z" level=info msg="TearDown network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" successfully" Feb 13 15:25:55.819830 containerd[1605]: time="2025-02-13T15:25:55.819794018Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" returns successfully" Feb 13 15:25:55.821337 containerd[1605]: time="2025-02-13T15:25:55.821317819Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:25:55.822777 kubelet[2845]: I0213 15:25:55.822748 2845 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645" Feb 13 15:25:55.829021 containerd[1605]: time="2025-02-13T15:25:55.828985737Z" level=info msg="StopPodSandbox for \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\"" Feb 13 15:25:55.831724 containerd[1605]: time="2025-02-13T15:25:55.829585673Z" level=info msg="Ensure that sandbox 5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645 in task-service has been cleanup successfully" Feb 13 15:25:55.832227 containerd[1605]: time="2025-02-13T15:25:55.832208598Z" level=info msg="TearDown network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\" successfully" Feb 13 15:25:55.832289 containerd[1605]: time="2025-02-13T15:25:55.832276576Z" level=info msg="StopPodSandbox for \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\" returns successfully" Feb 13 15:25:55.838868 containerd[1605]: time="2025-02-13T15:25:55.838010993Z" level=info msg="StopPodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\"" Feb 13 15:25:55.839813 containerd[1605]: time="2025-02-13T15:25:55.839359114Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:25:55.839813 containerd[1605]: time="2025-02-13T15:25:55.839412434Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:25:55.839813 containerd[1605]: time="2025-02-13T15:25:55.839685857Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:25:55.840015 containerd[1605]: time="2025-02-13T15:25:55.839780215Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:25:55.840116 containerd[1605]: time="2025-02-13T15:25:55.840087671Z" level=info 
msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:25:55.840276 containerd[1605]: time="2025-02-13T15:25:55.839939934Z" level=info msg="TearDown network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" successfully" Feb 13 15:25:55.840388 containerd[1605]: time="2025-02-13T15:25:55.840349523Z" level=info msg="StopPodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" returns successfully" Feb 13 15:25:55.841398 containerd[1605]: time="2025-02-13T15:25:55.841002549Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" Feb 13 15:25:55.841398 containerd[1605]: time="2025-02-13T15:25:55.841085726Z" level=info msg="TearDown network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" successfully" Feb 13 15:25:55.841398 containerd[1605]: time="2025-02-13T15:25:55.841100033Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" returns successfully" Feb 13 15:25:55.841398 containerd[1605]: time="2025-02-13T15:25:55.841203146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:6,}" Feb 13 15:25:55.841722 containerd[1605]: time="2025-02-13T15:25:55.841701582Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:25:55.841999 containerd[1605]: time="2025-02-13T15:25:55.841894153Z" level=info msg="TearDown network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" successfully" Feb 13 15:25:55.842039 kubelet[2845]: I0213 15:25:55.842000 2845 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6" Feb 13 
15:25:55.842197 containerd[1605]: time="2025-02-13T15:25:55.842121991Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" returns successfully" Feb 13 15:25:55.843315 containerd[1605]: time="2025-02-13T15:25:55.842831053Z" level=info msg="StopPodSandbox for \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\"" Feb 13 15:25:55.843315 containerd[1605]: time="2025-02-13T15:25:55.843063609Z" level=info msg="Ensure that sandbox b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6 in task-service has been cleanup successfully" Feb 13 15:25:55.843688 containerd[1605]: time="2025-02-13T15:25:55.843454964Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:25:55.843688 containerd[1605]: time="2025-02-13T15:25:55.843540845Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:25:55.843688 containerd[1605]: time="2025-02-13T15:25:55.843550774Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:25:55.844640 containerd[1605]: time="2025-02-13T15:25:55.843848213Z" level=info msg="TearDown network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\" successfully" Feb 13 15:25:55.844640 containerd[1605]: time="2025-02-13T15:25:55.843868230Z" level=info msg="StopPodSandbox for \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\" returns successfully" Feb 13 15:25:55.844640 containerd[1605]: time="2025-02-13T15:25:55.843968218Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:25:55.844640 containerd[1605]: time="2025-02-13T15:25:55.844044360Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" 
successfully" Feb 13 15:25:55.844640 containerd[1605]: time="2025-02-13T15:25:55.844057856Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:25:55.844954 kubelet[2845]: E0213 15:25:55.844941 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:55.845622 containerd[1605]: time="2025-02-13T15:25:55.845588079Z" level=info msg="StopPodSandbox for \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\"" Feb 13 15:25:55.845779 containerd[1605]: time="2025-02-13T15:25:55.845632763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:6,}" Feb 13 15:25:55.845906 containerd[1605]: time="2025-02-13T15:25:55.845891518Z" level=info msg="TearDown network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" successfully" Feb 13 15:25:55.846032 containerd[1605]: time="2025-02-13T15:25:55.846019489Z" level=info msg="StopPodSandbox for \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" returns successfully" Feb 13 15:25:55.848222 containerd[1605]: time="2025-02-13T15:25:55.848201336Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" Feb 13 15:25:55.848291 containerd[1605]: time="2025-02-13T15:25:55.848279693Z" level=info msg="TearDown network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" successfully" Feb 13 15:25:55.848291 containerd[1605]: time="2025-02-13T15:25:55.848288680Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" returns successfully" Feb 13 15:25:55.848590 containerd[1605]: time="2025-02-13T15:25:55.848571070Z" level=info msg="StopPodSandbox 
for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:25:55.848734 containerd[1605]: time="2025-02-13T15:25:55.848684644Z" level=info msg="TearDown network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" successfully" Feb 13 15:25:55.848734 containerd[1605]: time="2025-02-13T15:25:55.848724548Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" returns successfully" Feb 13 15:25:55.849075 containerd[1605]: time="2025-02-13T15:25:55.849045651Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:25:55.849207 containerd[1605]: time="2025-02-13T15:25:55.849186175Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:25:55.849241 containerd[1605]: time="2025-02-13T15:25:55.849207215Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:25:55.849515 containerd[1605]: time="2025-02-13T15:25:55.849497339Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:25:55.849585 containerd[1605]: time="2025-02-13T15:25:55.849572891Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:25:55.849609 containerd[1605]: time="2025-02-13T15:25:55.849584273Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:25:55.849926 containerd[1605]: time="2025-02-13T15:25:55.849904944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:25:56.085222 systemd-networkd[1250]: cali01577924a13: Link 
UP Feb 13 15:25:56.085542 systemd-networkd[1250]: cali01577924a13: Gained carrier Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:55.952 [INFO][5091] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:55.970 [INFO][5091] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--cc6ff-eth0 coredns-76f75df574- kube-system 89340465-825b-41b6-aad8-3f7e90dab570 709 0 2025-02-13 15:25:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-cc6ff eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01577924a13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:55.970 [INFO][5091] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.047 [INFO][5158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" HandleID="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Workload="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5158] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" HandleID="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Workload="localhost-k8s-coredns--76f75df574--cc6ff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-cc6ff", "timestamp":"2025-02-13 15:25:56.047024537 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5158] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.055 [INFO][5158] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.059 [INFO][5158] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.062 [INFO][5158] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.063 [INFO][5158] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.064 [INFO][5158] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.064 [INFO][5158] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.066 [INFO][5158] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56 Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.069 [INFO][5158] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.073 [INFO][5158] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.074 [INFO][5158] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" host="localhost" Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.074 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:25:56.102903 containerd[1605]: 2025-02-13 15:25:56.074 [INFO][5158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" HandleID="k8s-pod-network.9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Workload="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.103628 containerd[1605]: 2025-02-13 15:25:56.077 [INFO][5091] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cc6ff-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"89340465-825b-41b6-aad8-3f7e90dab570", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-cc6ff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01577924a13", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.103628 containerd[1605]: 2025-02-13 15:25:56.077 [INFO][5091] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.103628 containerd[1605]: 2025-02-13 15:25:56.077 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01577924a13 ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.103628 containerd[1605]: 2025-02-13 15:25:56.085 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.103628 containerd[1605]: 2025-02-13 15:25:56.086 [INFO][5091] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cc6ff-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"89340465-825b-41b6-aad8-3f7e90dab570", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56", Pod:"coredns-76f75df574-cc6ff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01577924a13", MAC:"46:e5:90:df:78:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.103628 containerd[1605]: 2025-02-13 15:25:56.100 [INFO][5091] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56" Namespace="kube-system" 
Pod="coredns-76f75df574-cc6ff" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cc6ff-eth0" Feb 13 15:25:56.153955 systemd-networkd[1250]: cali7d56efa4a12: Link UP Feb 13 15:25:56.154160 systemd-networkd[1250]: cali7d56efa4a12: Gained carrier Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:55.882 [INFO][5039] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:55.911 [INFO][5039] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--b899m-eth0 coredns-76f75df574- kube-system 6773e7e4-f620-4ef0-9bbb-a553c14f7656 715 0 2025-02-13 15:25:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-b899m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d56efa4a12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:55.914 [INFO][5039] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.040 [INFO][5120] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" HandleID="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Workload="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 
13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.053 [INFO][5120] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" HandleID="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Workload="localhost-k8s-coredns--76f75df574--b899m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042a1c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-b899m", "timestamp":"2025-02-13 15:25:56.040053389 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.053 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.074 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.074 [INFO][5120] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.076 [INFO][5120] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.082 [INFO][5120] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.090 [INFO][5120] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.092 [INFO][5120] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.101 [INFO][5120] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.101 [INFO][5120] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.103 [INFO][5120] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.109 [INFO][5120] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.116 [INFO][5120] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.116 [INFO][5120] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" host="localhost" Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.116 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:25:56.167845 containerd[1605]: 2025-02-13 15:25:56.116 [INFO][5120] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" HandleID="k8s-pod-network.e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Workload="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 13 15:25:56.168410 containerd[1605]: 2025-02-13 15:25:56.147 [INFO][5039] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--b899m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6773e7e4-f620-4ef0-9bbb-a553c14f7656", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-b899m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d56efa4a12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.168410 containerd[1605]: 2025-02-13 15:25:56.147 [INFO][5039] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 13 15:25:56.168410 containerd[1605]: 2025-02-13 15:25:56.147 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d56efa4a12 ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 13 15:25:56.168410 containerd[1605]: 2025-02-13 15:25:56.154 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 13 
15:25:56.168410 containerd[1605]: 2025-02-13 15:25:56.154 [INFO][5039] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--b899m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"6773e7e4-f620-4ef0-9bbb-a553c14f7656", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a", Pod:"coredns-76f75df574-b899m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d56efa4a12", MAC:"ae:1f:2d:17:b0:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.168410 containerd[1605]: 2025-02-13 15:25:56.163 [INFO][5039] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a" Namespace="kube-system" Pod="coredns-76f75df574-b899m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--b899m-eth0" Feb 13 15:25:56.188386 systemd-networkd[1250]: calidaa943e5bf7: Link UP Feb 13 15:25:56.188637 systemd-networkd[1250]: calidaa943e5bf7: Gained carrier Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:55.928 [INFO][5063] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:55.949 [INFO][5063] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hctxf-eth0 csi-node-driver- calico-system f4c1f8e5-b232-41b8-a095-6576678dbe57 610 0 2025-02-13 15:25:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hctxf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidaa943e5bf7 [] []}} ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:55.949 [INFO][5063] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" 
Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.036 [INFO][5133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" HandleID="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Workload="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.053 [INFO][5133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" HandleID="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Workload="localhost-k8s-csi--node--driver--hctxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c3880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hctxf", "timestamp":"2025-02-13 15:25:56.036111679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.053 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.120 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.120 [INFO][5133] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.129 [INFO][5133] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.142 [INFO][5133] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.156 [INFO][5133] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.163 [INFO][5133] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.166 [INFO][5133] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.166 [INFO][5133] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.168 [INFO][5133] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32 Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.174 [INFO][5133] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.181 [INFO][5133] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.181 [INFO][5133] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" host="localhost" Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.181 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:25:56.207181 containerd[1605]: 2025-02-13 15:25:56.181 [INFO][5133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" HandleID="k8s-pod-network.1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Workload="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.208050 containerd[1605]: 2025-02-13 15:25:56.185 [INFO][5063] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hctxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f4c1f8e5-b232-41b8-a095-6576678dbe57", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hctxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidaa943e5bf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.208050 containerd[1605]: 2025-02-13 15:25:56.185 [INFO][5063] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.208050 containerd[1605]: 2025-02-13 15:25:56.185 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaa943e5bf7 ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.208050 containerd[1605]: 2025-02-13 15:25:56.189 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.208050 containerd[1605]: 2025-02-13 15:25:56.189 [INFO][5063] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" 
Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hctxf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f4c1f8e5-b232-41b8-a095-6576678dbe57", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32", Pod:"csi-node-driver-hctxf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidaa943e5bf7", MAC:"da:ef:f9:dd:a2:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.208050 containerd[1605]: 2025-02-13 15:25:56.198 [INFO][5063] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32" Namespace="calico-system" Pod="csi-node-driver-hctxf" WorkloadEndpoint="localhost-k8s-csi--node--driver--hctxf-eth0" Feb 13 15:25:56.218564 containerd[1605]: 
time="2025-02-13T15:25:56.218445711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:56.218564 containerd[1605]: time="2025-02-13T15:25:56.218508730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:56.218564 containerd[1605]: time="2025-02-13T15:25:56.218519671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.220289 containerd[1605]: time="2025-02-13T15:25:56.218640407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.221974 systemd[1]: run-netns-cni\x2d650081b4\x2d4343\x2d858d\x2dc24b\x2dd749eff4a619.mount: Deactivated successfully. Feb 13 15:25:56.222339 containerd[1605]: time="2025-02-13T15:25:56.221729546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:56.222339 containerd[1605]: time="2025-02-13T15:25:56.221769521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:56.222339 containerd[1605]: time="2025-02-13T15:25:56.221785441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.222339 containerd[1605]: time="2025-02-13T15:25:56.221883727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.222944 systemd[1]: run-netns-cni\x2d9a0bd031\x2d6709\x2d6692\x2d838e\x2ded8170cd830c.mount: Deactivated successfully. 
Feb 13 15:25:56.223087 systemd[1]: run-netns-cni\x2dc9242eaa\x2d347f\x2d12c8\x2da4a2\x2d86115d8cb662.mount: Deactivated successfully. Feb 13 15:25:56.223231 systemd[1]: run-netns-cni\x2d7f2b9ce3\x2d70d3\x2da0be\x2d8cbb\x2d928ccbe412fe.mount: Deactivated successfully. Feb 13 15:25:56.223361 systemd[1]: run-netns-cni\x2d39cd965c\x2d4d23\x2d5352\x2d3e6b\x2dab07f33f7cda.mount: Deactivated successfully. Feb 13 15:25:56.223490 systemd[1]: run-netns-cni\x2d05f65c20\x2d0b1a\x2dff5c\x2da475\x2d31f3d6602b96.mount: Deactivated successfully. Feb 13 15:25:56.234679 systemd-networkd[1250]: cali6eddeb13143: Link UP Feb 13 15:25:56.234883 systemd-networkd[1250]: cali6eddeb13143: Gained carrier Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:55.949 [INFO][5076] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:55.965 [INFO][5076] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0 calico-kube-controllers-9b8d49698- calico-system 986d0026-8722-498a-9efe-05e1624276c3 712 0 2025-02-13 15:25:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9b8d49698 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9b8d49698-qwwkp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6eddeb13143 [] []}} ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:55.965 [INFO][5076] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.041 [INFO][5152] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" HandleID="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Workload="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5152] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" HandleID="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Workload="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ce570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9b8d49698-qwwkp", "timestamp":"2025-02-13 15:25:56.040451507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.181 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.181 [INFO][5152] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.183 [INFO][5152] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.190 [INFO][5152] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.197 [INFO][5152] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.201 [INFO][5152] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.203 [INFO][5152] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.203 [INFO][5152] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.204 [INFO][5152] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90 Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.211 [INFO][5152] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.218 [INFO][5152] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.218 [INFO][5152] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" host="localhost" Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.218 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:25:56.250929 containerd[1605]: 2025-02-13 15:25:56.218 [INFO][5152] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" HandleID="k8s-pod-network.370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Workload="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.251477 containerd[1605]: 2025-02-13 15:25:56.225 [INFO][5076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0", GenerateName:"calico-kube-controllers-9b8d49698-", Namespace:"calico-system", SelfLink:"", UID:"986d0026-8722-498a-9efe-05e1624276c3", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b8d49698", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9b8d49698-qwwkp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6eddeb13143", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.251477 containerd[1605]: 2025-02-13 15:25:56.225 [INFO][5076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.251477 containerd[1605]: 2025-02-13 15:25:56.225 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6eddeb13143 ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.251477 containerd[1605]: 2025-02-13 15:25:56.230 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.251477 containerd[1605]: 2025-02-13 15:25:56.231 [INFO][5076] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0", GenerateName:"calico-kube-controllers-9b8d49698-", Namespace:"calico-system", SelfLink:"", UID:"986d0026-8722-498a-9efe-05e1624276c3", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b8d49698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90", Pod:"calico-kube-controllers-9b8d49698-qwwkp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6eddeb13143", MAC:"52:ba:64:b3:96:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.251477 containerd[1605]: 2025-02-13 15:25:56.245 [INFO][5076] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90" Namespace="calico-system" Pod="calico-kube-controllers-9b8d49698-qwwkp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9b8d49698--qwwkp-eth0" Feb 13 15:25:56.272999 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:56.278247 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:56.287193 systemd-networkd[1250]: cali06a09649594: Link UP Feb 13 15:25:56.287401 systemd-networkd[1250]: cali06a09649594: Gained carrier Feb 13 15:25:56.304293 containerd[1605]: time="2025-02-13T15:25:56.303947978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:56.304293 containerd[1605]: time="2025-02-13T15:25:56.304222793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:56.304293 containerd[1605]: time="2025-02-13T15:25:56.304238222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.304650 containerd[1605]: time="2025-02-13T15:25:56.304341336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:55.972 [INFO][5107] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:55.985 [INFO][5107] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0 calico-apiserver-5cfc58b57b- calico-apiserver 77fafed8-e30a-4324-8899-dc18d6f5bcb9 714 0 2025-02-13 15:25:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cfc58b57b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cfc58b57b-p7rtt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06a09649594 [] []}} ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:55.986 [INFO][5107] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.036 [INFO][5164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" HandleID="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Workload="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 
15:25:56.054 [INFO][5164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" HandleID="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Workload="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005119d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cfc58b57b-p7rtt", "timestamp":"2025-02-13 15:25:56.036340098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.221 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.221 [INFO][5164] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.224 [INFO][5164] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.235 [INFO][5164] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.242 [INFO][5164] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.245 [INFO][5164] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.253 [INFO][5164] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.254 [INFO][5164] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.257 [INFO][5164] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.261 [INFO][5164] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.270 [INFO][5164] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.270 [INFO][5164] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" host="localhost" Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.270 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:25:56.311218 containerd[1605]: 2025-02-13 15:25:56.270 [INFO][5164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" HandleID="k8s-pod-network.52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Workload="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.311757 containerd[1605]: 2025-02-13 15:25:56.280 [INFO][5107] cni-plugin/k8s.go 386: Populated endpoint ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0", GenerateName:"calico-apiserver-5cfc58b57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"77fafed8-e30a-4324-8899-dc18d6f5bcb9", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cfc58b57b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cfc58b57b-p7rtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06a09649594", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.311757 containerd[1605]: 2025-02-13 15:25:56.280 [INFO][5107] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.311757 containerd[1605]: 2025-02-13 15:25:56.280 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06a09649594 ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.311757 containerd[1605]: 2025-02-13 15:25:56.285 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.311757 containerd[1605]: 2025-02-13 15:25:56.288 [INFO][5107] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0", GenerateName:"calico-apiserver-5cfc58b57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"77fafed8-e30a-4324-8899-dc18d6f5bcb9", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cfc58b57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c", Pod:"calico-apiserver-5cfc58b57b-p7rtt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06a09649594", MAC:"66:53:9c:fc:90:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.311757 containerd[1605]: 2025-02-13 15:25:56.297 [INFO][5107] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-p7rtt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--p7rtt-eth0" Feb 13 15:25:56.312416 containerd[1605]: time="2025-02-13T15:25:56.311314498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:56.312416 containerd[1605]: time="2025-02-13T15:25:56.311464800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:56.312416 containerd[1605]: time="2025-02-13T15:25:56.311486901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.313087 containerd[1605]: time="2025-02-13T15:25:56.312287715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.335172 containerd[1605]: time="2025-02-13T15:25:56.335112948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cc6ff,Uid:89340465-825b-41b6-aad8-3f7e90dab570,Namespace:kube-system,Attempt:6,} returns sandbox id \"9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56\"" Feb 13 15:25:56.335962 kubelet[2845]: E0213 15:25:56.335888 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:56.337009 containerd[1605]: time="2025-02-13T15:25:56.336989882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b899m,Uid:6773e7e4-f620-4ef0-9bbb-a553c14f7656,Namespace:kube-system,Attempt:6,} returns sandbox id \"e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a\"" Feb 13 15:25:56.338075 kubelet[2845]: E0213 
15:25:56.338063 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:56.338986 containerd[1605]: time="2025-02-13T15:25:56.338967906Z" level=info msg="CreateContainer within sandbox \"9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:25:56.346272 containerd[1605]: time="2025-02-13T15:25:56.346237864Z" level=info msg="CreateContainer within sandbox \"e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:25:56.346442 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:56.352599 systemd-networkd[1250]: calie5ddf0c9cc2: Link UP Feb 13 15:25:56.353689 systemd-networkd[1250]: calie5ddf0c9cc2: Gained carrier Feb 13 15:25:56.369159 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:55.934 [INFO][5051] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:55.948 [INFO][5051] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0 calico-apiserver-5cfc58b57b- calico-apiserver 84ad88b0-6adb-4b60-9716-75d388d2367c 713 0 2025-02-13 15:25:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cfc58b57b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cfc58b57b-pxnbk eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calie5ddf0c9cc2 [] []}} ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:55.948 [INFO][5051] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.041 [INFO][5134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" HandleID="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Workload="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" HandleID="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Workload="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000525600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cfc58b57b-pxnbk", "timestamp":"2025-02-13 15:25:56.041700211 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.054 [INFO][5134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.271 [INFO][5134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.271 [INFO][5134] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.277 [INFO][5134] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.283 [INFO][5134] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.287 [INFO][5134] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.292 [INFO][5134] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.300 [INFO][5134] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.300 [INFO][5134] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.303 [INFO][5134] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.311 [INFO][5134] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.323 [INFO][5134] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.324 [INFO][5134] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" host="localhost" Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.325 [INFO][5134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:25:56.372724 containerd[1605]: 2025-02-13 15:25:56.325 [INFO][5134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" HandleID="k8s-pod-network.2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Workload="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.373630 containerd[1605]: 2025-02-13 15:25:56.347 [INFO][5051] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0", GenerateName:"calico-apiserver-5cfc58b57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"84ad88b0-6adb-4b60-9716-75d388d2367c", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5cfc58b57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cfc58b57b-pxnbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5ddf0c9cc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.373630 containerd[1605]: 2025-02-13 15:25:56.348 [INFO][5051] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.373630 containerd[1605]: 2025-02-13 15:25:56.348 [INFO][5051] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5ddf0c9cc2 ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.373630 containerd[1605]: 2025-02-13 15:25:56.353 [INFO][5051] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.373630 containerd[1605]: 2025-02-13 
15:25:56.354 [INFO][5051] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0", GenerateName:"calico-apiserver-5cfc58b57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"84ad88b0-6adb-4b60-9716-75d388d2367c", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cfc58b57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f", Pod:"calico-apiserver-5cfc58b57b-pxnbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5ddf0c9cc2", MAC:"da:1b:e0:d7:17:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:25:56.373630 containerd[1605]: 2025-02-13 15:25:56.364 [INFO][5051] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f" Namespace="calico-apiserver" Pod="calico-apiserver-5cfc58b57b-pxnbk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cfc58b57b--pxnbk-eth0" Feb 13 15:25:56.377031 containerd[1605]: time="2025-02-13T15:25:56.376001694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:56.377031 containerd[1605]: time="2025-02-13T15:25:56.376064904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:56.377031 containerd[1605]: time="2025-02-13T15:25:56.376078399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.377031 containerd[1605]: time="2025-02-13T15:25:56.376189337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.384676 containerd[1605]: time="2025-02-13T15:25:56.384637007Z" level=info msg="CreateContainer within sandbox \"9fc9d9f2de81ac9db3630a8bd4412452ab5832405b7f0ac2ea221d1bb3967b56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"938927ca79a995931e4bc6a7042ce719721727288ab9b624bb33b5c586b8104f\"" Feb 13 15:25:56.388079 containerd[1605]: time="2025-02-13T15:25:56.387262787Z" level=info msg="StartContainer for \"938927ca79a995931e4bc6a7042ce719721727288ab9b624bb33b5c586b8104f\"" Feb 13 15:25:56.396612 containerd[1605]: time="2025-02-13T15:25:56.396407175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hctxf,Uid:f4c1f8e5-b232-41b8-a095-6576678dbe57,Namespace:calico-system,Attempt:6,} returns sandbox id \"1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32\"" Feb 13 15:25:56.399310 containerd[1605]: time="2025-02-13T15:25:56.399285208Z" level=info msg="CreateContainer within sandbox \"e438dc9bcd77263a62cba656c443ebc856000dfa7d523e2ac1d3a7e21313094a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf2f282881144c9374d03ca61a18058f9970bf199b461e427bfc12dc18f4c27b\"" Feb 13 15:25:56.404205 containerd[1605]: time="2025-02-13T15:25:56.404183223Z" level=info msg="StartContainer for \"cf2f282881144c9374d03ca61a18058f9970bf199b461e427bfc12dc18f4c27b\"" Feb 13 15:25:56.404257 containerd[1605]: time="2025-02-13T15:25:56.404202230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:25:56.407514 containerd[1605]: time="2025-02-13T15:25:56.407445548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b8d49698-qwwkp,Uid:986d0026-8722-498a-9efe-05e1624276c3,Namespace:calico-system,Attempt:6,} returns sandbox id \"370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90\"" Feb 13 15:25:56.414423 systemd-resolved[1469]: Failed to determine the local hostname and 
LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:56.420211 containerd[1605]: time="2025-02-13T15:25:56.419921952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:56.420211 containerd[1605]: time="2025-02-13T15:25:56.419989169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:56.420819 containerd[1605]: time="2025-02-13T15:25:56.420200125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.420819 containerd[1605]: time="2025-02-13T15:25:56.420562966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:56.450575 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:56.458946 containerd[1605]: time="2025-02-13T15:25:56.458881397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-p7rtt,Uid:77fafed8-e30a-4324-8899-dc18d6f5bcb9,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c\"" Feb 13 15:25:56.486471 containerd[1605]: time="2025-02-13T15:25:56.486429738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cfc58b57b-pxnbk,Uid:84ad88b0-6adb-4b60-9716-75d388d2367c,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f\"" Feb 13 15:25:56.496851 containerd[1605]: time="2025-02-13T15:25:56.496802321Z" level=info msg="StartContainer for \"938927ca79a995931e4bc6a7042ce719721727288ab9b624bb33b5c586b8104f\" returns successfully" Feb 13 15:25:56.497016 containerd[1605]: time="2025-02-13T15:25:56.496884476Z" 
level=info msg="StartContainer for \"cf2f282881144c9374d03ca61a18058f9970bf199b461e427bfc12dc18f4c27b\" returns successfully" Feb 13 15:25:56.857113 kubelet[2845]: E0213 15:25:56.857087 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:56.863987 kubelet[2845]: E0213 15:25:56.863562 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:56.866207 kubelet[2845]: E0213 15:25:56.866190 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:56.868087 kubelet[2845]: I0213 15:25:56.867779 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b899m" podStartSLOduration=28.867749494999998 podStartE2EDuration="28.867749495s" podCreationTimestamp="2025-02-13 15:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:25:56.867529382 +0000 UTC m=+43.476893564" watchObservedRunningTime="2025-02-13 15:25:56.867749495 +0000 UTC m=+43.477113677" Feb 13 15:25:56.881665 kubelet[2845]: I0213 15:25:56.881613 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cc6ff" podStartSLOduration=28.881573258 podStartE2EDuration="28.881573258s" podCreationTimestamp="2025-02-13 15:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:25:56.881416955 +0000 UTC m=+43.490781137" watchObservedRunningTime="2025-02-13 15:25:56.881573258 +0000 UTC m=+43.490937440" Feb 13 15:25:57.185179 kernel: 
bpftool[5730]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:25:57.414577 systemd-networkd[1250]: vxlan.calico: Link UP Feb 13 15:25:57.415079 systemd-networkd[1250]: vxlan.calico: Gained carrier Feb 13 15:25:57.460940 systemd-networkd[1250]: cali06a09649594: Gained IPv6LL Feb 13 15:25:57.464183 systemd-networkd[1250]: cali6eddeb13143: Gained IPv6LL Feb 13 15:25:57.524295 systemd-networkd[1250]: cali7d56efa4a12: Gained IPv6LL Feb 13 15:25:57.717250 systemd-networkd[1250]: cali01577924a13: Gained IPv6LL Feb 13 15:25:57.870888 kubelet[2845]: E0213 15:25:57.870844 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:57.871636 kubelet[2845]: E0213 15:25:57.871609 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:57.871713 kubelet[2845]: E0213 15:25:57.871694 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:57.908283 systemd-networkd[1250]: calidaa943e5bf7: Gained IPv6LL Feb 13 15:25:58.228415 systemd-networkd[1250]: calie5ddf0c9cc2: Gained IPv6LL Feb 13 15:25:58.277234 containerd[1605]: time="2025-02-13T15:25:58.277182562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:58.277942 containerd[1605]: time="2025-02-13T15:25:58.277908755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:25:58.279119 containerd[1605]: time="2025-02-13T15:25:58.279086215Z" level=info msg="ImageCreate event 
name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:58.281357 containerd[1605]: time="2025-02-13T15:25:58.281315300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:58.281828 containerd[1605]: time="2025-02-13T15:25:58.281800690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.877559347s" Feb 13 15:25:58.281859 containerd[1605]: time="2025-02-13T15:25:58.281837489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:25:58.282330 containerd[1605]: time="2025-02-13T15:25:58.282290801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:25:58.284845 containerd[1605]: time="2025-02-13T15:25:58.284797987Z" level=info msg="CreateContainer within sandbox \"1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:25:58.319186 containerd[1605]: time="2025-02-13T15:25:58.319134153Z" level=info msg="CreateContainer within sandbox \"1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b4c909acca3136c7bd07e03116f30a5fa63d213f68053a93cf8e0e3494971e5f\"" Feb 13 15:25:58.320174 containerd[1605]: time="2025-02-13T15:25:58.319694084Z" level=info msg="StartContainer for 
\"b4c909acca3136c7bd07e03116f30a5fa63d213f68053a93cf8e0e3494971e5f\"" Feb 13 15:25:58.382174 containerd[1605]: time="2025-02-13T15:25:58.382114419Z" level=info msg="StartContainer for \"b4c909acca3136c7bd07e03116f30a5fa63d213f68053a93cf8e0e3494971e5f\" returns successfully" Feb 13 15:25:58.868284 systemd-networkd[1250]: vxlan.calico: Gained IPv6LL Feb 13 15:25:58.876820 kubelet[2845]: E0213 15:25:58.876770 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:58.877349 kubelet[2845]: E0213 15:25:58.876864 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:59.575383 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:35274.service - OpenSSH per-connection server daemon (10.0.0.1:35274). Feb 13 15:25:59.616833 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 35274 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:25:59.618357 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:59.622245 systemd-logind[1588]: New session 9 of user core. Feb 13 15:25:59.636433 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:25:59.762851 sshd[5875]: Connection closed by 10.0.0.1 port 35274 Feb 13 15:25:59.763224 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:59.766981 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:35274.service: Deactivated successfully. Feb 13 15:25:59.769385 systemd-logind[1588]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:25:59.769478 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:25:59.770652 systemd-logind[1588]: Removed session 9. 
Feb 13 15:26:01.225031 containerd[1605]: time="2025-02-13T15:26:01.224985242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:01.226354 containerd[1605]: time="2025-02-13T15:26:01.226294950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:26:01.227536 containerd[1605]: time="2025-02-13T15:26:01.227505181Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:01.229664 containerd[1605]: time="2025-02-13T15:26:01.229633836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:01.230241 containerd[1605]: time="2025-02-13T15:26:01.230200660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.947882829s" Feb 13 15:26:01.230241 containerd[1605]: time="2025-02-13T15:26:01.230237329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:26:01.230808 containerd[1605]: time="2025-02-13T15:26:01.230791600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:26:01.237597 containerd[1605]: time="2025-02-13T15:26:01.237564521Z" level=info msg="CreateContainer within sandbox 
\"370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:26:01.252691 containerd[1605]: time="2025-02-13T15:26:01.252650225Z" level=info msg="CreateContainer within sandbox \"370896a3e9318afd84b60af67729743683c4dc914554d508427ed47e2c4d1e90\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2de86b3cfef823254acd08c3be63cd90f59d46697abe80398d7ba7926344f853\"" Feb 13 15:26:01.253135 containerd[1605]: time="2025-02-13T15:26:01.253107975Z" level=info msg="StartContainer for \"2de86b3cfef823254acd08c3be63cd90f59d46697abe80398d7ba7926344f853\"" Feb 13 15:26:01.577732 containerd[1605]: time="2025-02-13T15:26:01.577600054Z" level=info msg="StartContainer for \"2de86b3cfef823254acd08c3be63cd90f59d46697abe80398d7ba7926344f853\" returns successfully" Feb 13 15:26:01.896015 kubelet[2845]: I0213 15:26:01.895516 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9b8d49698-qwwkp" podStartSLOduration=19.075534046 podStartE2EDuration="23.89546219s" podCreationTimestamp="2025-02-13 15:25:38 +0000 UTC" firstStartedPulling="2025-02-13 15:25:56.410648893 +0000 UTC m=+43.020013075" lastFinishedPulling="2025-02-13 15:26:01.230577037 +0000 UTC m=+47.839941219" observedRunningTime="2025-02-13 15:26:01.895253018 +0000 UTC m=+48.504617210" watchObservedRunningTime="2025-02-13 15:26:01.89546219 +0000 UTC m=+48.504826372" Feb 13 15:26:04.005491 containerd[1605]: time="2025-02-13T15:26:04.005441440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:04.006259 containerd[1605]: time="2025-02-13T15:26:04.006182902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 15:26:04.007529 containerd[1605]: 
time="2025-02-13T15:26:04.007476750Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:04.010524 containerd[1605]: time="2025-02-13T15:26:04.010485706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:04.011311 containerd[1605]: time="2025-02-13T15:26:04.011281559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.780411473s" Feb 13 15:26:04.011367 containerd[1605]: time="2025-02-13T15:26:04.011311105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:26:04.011874 containerd[1605]: time="2025-02-13T15:26:04.011850347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:26:04.013245 containerd[1605]: time="2025-02-13T15:26:04.013219236Z" level=info msg="CreateContainer within sandbox \"52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:26:04.033285 containerd[1605]: time="2025-02-13T15:26:04.033236200Z" level=info msg="CreateContainer within sandbox \"52e5ec2ea287edaf54fa375d365f9097c060b111dffc2560a6384fa6d93ceb2c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"229f5002153dd7247706aac486879955a7423d6a47ff8bb8748770bce06f2e6f\"" Feb 13 15:26:04.033836 
containerd[1605]: time="2025-02-13T15:26:04.033804536Z" level=info msg="StartContainer for \"229f5002153dd7247706aac486879955a7423d6a47ff8bb8748770bce06f2e6f\"" Feb 13 15:26:04.108457 containerd[1605]: time="2025-02-13T15:26:04.108361411Z" level=info msg="StartContainer for \"229f5002153dd7247706aac486879955a7423d6a47ff8bb8748770bce06f2e6f\" returns successfully" Feb 13 15:26:04.499887 containerd[1605]: time="2025-02-13T15:26:04.499829341Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:04.500848 containerd[1605]: time="2025-02-13T15:26:04.500800463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:26:04.512885 containerd[1605]: time="2025-02-13T15:26:04.512846178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 500.964612ms" Feb 13 15:26:04.513027 containerd[1605]: time="2025-02-13T15:26:04.512902964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:26:04.514108 containerd[1605]: time="2025-02-13T15:26:04.514083591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:26:04.515205 containerd[1605]: time="2025-02-13T15:26:04.515174998Z" level=info msg="CreateContainer within sandbox \"2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:26:04.531822 containerd[1605]: time="2025-02-13T15:26:04.531776503Z" 
level=info msg="CreateContainer within sandbox \"2b90a1789a339fe5352baf26d3efb2bada5f62b2d80ca11b37b70d1cfa65725f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"599fd3d2e47f7f9e59fbae8b0c6917941a8acc58acc02fe6ab18a5a562b7b266\"" Feb 13 15:26:04.532375 containerd[1605]: time="2025-02-13T15:26:04.532353656Z" level=info msg="StartContainer for \"599fd3d2e47f7f9e59fbae8b0c6917941a8acc58acc02fe6ab18a5a562b7b266\"" Feb 13 15:26:04.712056 containerd[1605]: time="2025-02-13T15:26:04.711995943Z" level=info msg="StartContainer for \"599fd3d2e47f7f9e59fbae8b0c6917941a8acc58acc02fe6ab18a5a562b7b266\" returns successfully" Feb 13 15:26:04.775485 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182). Feb 13 15:26:04.824878 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:04.826405 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:04.831593 systemd-logind[1588]: New session 10 of user core. Feb 13 15:26:04.836426 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 15:26:04.940693 kubelet[2845]: I0213 15:26:04.939742 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cfc58b57b-pxnbk" podStartSLOduration=19.914881063 podStartE2EDuration="27.939685768s" podCreationTimestamp="2025-02-13 15:25:37 +0000 UTC" firstStartedPulling="2025-02-13 15:25:56.488436916 +0000 UTC m=+43.097801098" lastFinishedPulling="2025-02-13 15:26:04.513241621 +0000 UTC m=+51.122605803" observedRunningTime="2025-02-13 15:26:04.939377328 +0000 UTC m=+51.548741530" watchObservedRunningTime="2025-02-13 15:26:04.939685768 +0000 UTC m=+51.549049980" Feb 13 15:26:04.940693 kubelet[2845]: I0213 15:26:04.940014 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cfc58b57b-p7rtt" podStartSLOduration=20.391827291 podStartE2EDuration="27.939989367s" podCreationTimestamp="2025-02-13 15:25:37 +0000 UTC" firstStartedPulling="2025-02-13 15:25:56.463483217 +0000 UTC m=+43.072847399" lastFinishedPulling="2025-02-13 15:26:04.011645293 +0000 UTC m=+50.621009475" observedRunningTime="2025-02-13 15:26:04.923230257 +0000 UTC m=+51.532594459" watchObservedRunningTime="2025-02-13 15:26:04.939989367 +0000 UTC m=+51.549353549" Feb 13 15:26:05.026193 sshd[6044]: Connection closed by 10.0.0.1 port 37182 Feb 13 15:26:05.027077 sshd-session[6041]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:05.031857 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:37182.service: Deactivated successfully. Feb 13 15:26:05.034854 systemd-logind[1588]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:26:05.034949 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:26:05.036038 systemd-logind[1588]: Removed session 10. 
Feb 13 15:26:05.923335 kubelet[2845]: I0213 15:26:05.923296 2845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:26:07.153248 containerd[1605]: time="2025-02-13T15:26:07.153189255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:07.154213 containerd[1605]: time="2025-02-13T15:26:07.154172139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:26:07.155478 containerd[1605]: time="2025-02-13T15:26:07.155435259Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:07.159099 containerd[1605]: time="2025-02-13T15:26:07.159015688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:07.159739 containerd[1605]: time="2025-02-13T15:26:07.159693319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.645579031s" Feb 13 15:26:07.159739 containerd[1605]: time="2025-02-13T15:26:07.159734437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:26:07.162764 containerd[1605]: time="2025-02-13T15:26:07.162740096Z" level=info 
msg="CreateContainer within sandbox \"1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:26:07.183971 containerd[1605]: time="2025-02-13T15:26:07.183916652Z" level=info msg="CreateContainer within sandbox \"1a6ad8c5c53703b10b4372647e1ef61ca257dd0a9b7bd2b4788ef8489316dc32\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"34d117a427e82de269095e6bf4a16bee79506532c72605a9662f850bee4c02d1\"" Feb 13 15:26:07.184502 containerd[1605]: time="2025-02-13T15:26:07.184471252Z" level=info msg="StartContainer for \"34d117a427e82de269095e6bf4a16bee79506532c72605a9662f850bee4c02d1\"" Feb 13 15:26:07.254563 containerd[1605]: time="2025-02-13T15:26:07.254396028Z" level=info msg="StartContainer for \"34d117a427e82de269095e6bf4a16bee79506532c72605a9662f850bee4c02d1\" returns successfully" Feb 13 15:26:07.605781 kubelet[2845]: I0213 15:26:07.605655 2845 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:26:07.612707 kubelet[2845]: I0213 15:26:07.612688 2845 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:26:07.946893 kubelet[2845]: I0213 15:26:07.946298 2845 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:26:07.963924 kubelet[2845]: I0213 15:26:07.963774 2845 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hctxf" podStartSLOduration=20.20222943 podStartE2EDuration="30.9637294s" podCreationTimestamp="2025-02-13 15:25:37 +0000 UTC" firstStartedPulling="2025-02-13 15:25:56.399234403 +0000 UTC m=+43.008598585" lastFinishedPulling="2025-02-13 15:26:07.160734363 +0000 UTC m=+53.770098555" observedRunningTime="2025-02-13 
15:26:07.945531192 +0000 UTC m=+54.554895374" watchObservedRunningTime="2025-02-13 15:26:07.9637294 +0000 UTC m=+54.573093582" Feb 13 15:26:10.041832 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:37188.service - OpenSSH per-connection server daemon (10.0.0.1:37188). Feb 13 15:26:10.086763 sshd[6115]: Accepted publickey for core from 10.0.0.1 port 37188 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:10.088622 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:10.093022 systemd-logind[1588]: New session 11 of user core. Feb 13 15:26:10.101418 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:26:10.220233 sshd[6118]: Connection closed by 10.0.0.1 port 37188 Feb 13 15:26:10.220609 sshd-session[6115]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:10.229357 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200). Feb 13 15:26:10.230055 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:37188.service: Deactivated successfully. Feb 13 15:26:10.232051 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:26:10.234004 systemd-logind[1588]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:26:10.235642 systemd-logind[1588]: Removed session 11. Feb 13 15:26:10.268195 sshd[6129]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:10.269518 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:10.273336 systemd-logind[1588]: New session 12 of user core. Feb 13 15:26:10.284621 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 15:26:10.449709 sshd[6134]: Connection closed by 10.0.0.1 port 37200 Feb 13 15:26:10.449677 sshd-session[6129]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:10.454174 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:37200.service: Deactivated successfully. Feb 13 15:26:10.458837 systemd-logind[1588]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:26:10.460622 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:26:10.472552 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:37212.service - OpenSSH per-connection server daemon (10.0.0.1:37212). Feb 13 15:26:10.473651 systemd-logind[1588]: Removed session 12. Feb 13 15:26:10.520460 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 37212 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:10.522064 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:10.526495 systemd-logind[1588]: New session 13 of user core. Feb 13 15:26:10.540635 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:26:10.665186 sshd[6147]: Connection closed by 10.0.0.1 port 37212 Feb 13 15:26:10.665573 sshd-session[6144]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:10.670285 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:37212.service: Deactivated successfully. Feb 13 15:26:10.673387 systemd-logind[1588]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:26:10.673400 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:26:10.674937 systemd-logind[1588]: Removed session 13. 
Feb 13 15:26:13.505101 containerd[1605]: time="2025-02-13T15:26:13.505059803Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:26:13.505616 containerd[1605]: time="2025-02-13T15:26:13.505193922Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:26:13.505616 containerd[1605]: time="2025-02-13T15:26:13.505207639Z" level=info msg="StopPodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:26:13.505616 containerd[1605]: time="2025-02-13T15:26:13.505571853Z" level=info msg="RemovePodSandbox for \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:26:13.513989 containerd[1605]: time="2025-02-13T15:26:13.513959720Z" level=info msg="Forcibly stopping sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\"" Feb 13 15:26:13.514084 containerd[1605]: time="2025-02-13T15:26:13.514041308Z" level=info msg="TearDown network for sandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" successfully" Feb 13 15:26:13.526207 containerd[1605]: time="2025-02-13T15:26:13.526171641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.526287 containerd[1605]: time="2025-02-13T15:26:13.526228420Z" level=info msg="RemovePodSandbox \"e4e2f7ce670702bc6d8370c94ec919775c28645e2b01ed9e446066803a34963e\" returns successfully" Feb 13 15:26:13.526708 containerd[1605]: time="2025-02-13T15:26:13.526686196Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:26:13.526813 containerd[1605]: time="2025-02-13T15:26:13.526795186Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:26:13.526844 containerd[1605]: time="2025-02-13T15:26:13.526811808Z" level=info msg="StopPodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:26:13.527106 containerd[1605]: time="2025-02-13T15:26:13.527082893Z" level=info msg="RemovePodSandbox for \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:26:13.527192 containerd[1605]: time="2025-02-13T15:26:13.527108583Z" level=info msg="Forcibly stopping sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\"" Feb 13 15:26:13.527285 containerd[1605]: time="2025-02-13T15:26:13.527230258Z" level=info msg="TearDown network for sandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" successfully" Feb 13 15:26:13.531099 containerd[1605]: time="2025-02-13T15:26:13.531065723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.531174 containerd[1605]: time="2025-02-13T15:26:13.531112985Z" level=info msg="RemovePodSandbox \"8281b825bc41e87d5dba272f6db4a4a96619f0bbaf91046819e619da4b4f3d40\" returns successfully" Feb 13 15:26:13.531451 containerd[1605]: time="2025-02-13T15:26:13.531425279Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:26:13.531567 containerd[1605]: time="2025-02-13T15:26:13.531549599Z" level=info msg="TearDown network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" successfully" Feb 13 15:26:13.531617 containerd[1605]: time="2025-02-13T15:26:13.531565730Z" level=info msg="StopPodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" returns successfully" Feb 13 15:26:13.531802 containerd[1605]: time="2025-02-13T15:26:13.531778631Z" level=info msg="RemovePodSandbox for \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:26:13.531866 containerd[1605]: time="2025-02-13T15:26:13.531806235Z" level=info msg="Forcibly stopping sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\"" Feb 13 15:26:13.531924 containerd[1605]: time="2025-02-13T15:26:13.531886350Z" level=info msg="TearDown network for sandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" successfully" Feb 13 15:26:13.535478 containerd[1605]: time="2025-02-13T15:26:13.535454368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.535552 containerd[1605]: time="2025-02-13T15:26:13.535508793Z" level=info msg="RemovePodSandbox \"38103d277b401765d46366d726f0f990f7fcf338dd50f85601f1e91a255046bc\" returns successfully" Feb 13 15:26:13.535773 containerd[1605]: time="2025-02-13T15:26:13.535741884Z" level=info msg="StopPodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" Feb 13 15:26:13.535865 containerd[1605]: time="2025-02-13T15:26:13.535843300Z" level=info msg="TearDown network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" successfully" Feb 13 15:26:13.535865 containerd[1605]: time="2025-02-13T15:26:13.535859362Z" level=info msg="StopPodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" returns successfully" Feb 13 15:26:13.536102 containerd[1605]: time="2025-02-13T15:26:13.536080109Z" level=info msg="RemovePodSandbox for \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" Feb 13 15:26:13.536165 containerd[1605]: time="2025-02-13T15:26:13.536104756Z" level=info msg="Forcibly stopping sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\"" Feb 13 15:26:13.536236 containerd[1605]: time="2025-02-13T15:26:13.536193808Z" level=info msg="TearDown network for sandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" successfully" Feb 13 15:26:13.539791 containerd[1605]: time="2025-02-13T15:26:13.539758671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.539872 containerd[1605]: time="2025-02-13T15:26:13.539804659Z" level=info msg="RemovePodSandbox \"77da0c2a0a8ddb18578a92147a433f5a78b1eb1124ee96cd154a8ddf650fc286\" returns successfully" Feb 13 15:26:13.540222 containerd[1605]: time="2025-02-13T15:26:13.540043040Z" level=info msg="StopPodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\"" Feb 13 15:26:13.540222 containerd[1605]: time="2025-02-13T15:26:13.540154155Z" level=info msg="TearDown network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" successfully" Feb 13 15:26:13.540222 containerd[1605]: time="2025-02-13T15:26:13.540168252Z" level=info msg="StopPodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" returns successfully" Feb 13 15:26:13.540414 containerd[1605]: time="2025-02-13T15:26:13.540391493Z" level=info msg="RemovePodSandbox for \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\"" Feb 13 15:26:13.540487 containerd[1605]: time="2025-02-13T15:26:13.540417374Z" level=info msg="Forcibly stopping sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\"" Feb 13 15:26:13.540540 containerd[1605]: time="2025-02-13T15:26:13.540491477Z" level=info msg="TearDown network for sandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" successfully" Feb 13 15:26:13.545663 containerd[1605]: time="2025-02-13T15:26:13.545630503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.545733 containerd[1605]: time="2025-02-13T15:26:13.545670590Z" level=info msg="RemovePodSandbox \"8a437ea3ab2053ee786e06a2882d96a2c8705b36b0fdd8f9bb3b04323d5dea7d\" returns successfully" Feb 13 15:26:13.545967 containerd[1605]: time="2025-02-13T15:26:13.545928789Z" level=info msg="StopPodSandbox for \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\"" Feb 13 15:26:13.546039 containerd[1605]: time="2025-02-13T15:26:13.546020407Z" level=info msg="TearDown network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\" successfully" Feb 13 15:26:13.546091 containerd[1605]: time="2025-02-13T15:26:13.546037029Z" level=info msg="StopPodSandbox for \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\" returns successfully" Feb 13 15:26:13.546291 containerd[1605]: time="2025-02-13T15:26:13.546258698Z" level=info msg="RemovePodSandbox for \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\"" Feb 13 15:26:13.546291 containerd[1605]: time="2025-02-13T15:26:13.546283115Z" level=info msg="Forcibly stopping sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\"" Feb 13 15:26:13.546402 containerd[1605]: time="2025-02-13T15:26:13.546363400Z" level=info msg="TearDown network for sandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\" successfully" Feb 13 15:26:13.550495 containerd[1605]: time="2025-02-13T15:26:13.550462124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.550584 containerd[1605]: time="2025-02-13T15:26:13.550525437Z" level=info msg="RemovePodSandbox \"d12e5ee1b10d7d96ffbad83c5248570cbb636640ccb8d0b18d47c373cb35f839\" returns successfully" Feb 13 15:26:13.550795 containerd[1605]: time="2025-02-13T15:26:13.550767896Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:26:13.550856 containerd[1605]: time="2025-02-13T15:26:13.550849544Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:26:13.550885 containerd[1605]: time="2025-02-13T15:26:13.550858841Z" level=info msg="StopPodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:26:13.551103 containerd[1605]: time="2025-02-13T15:26:13.551067675Z" level=info msg="RemovePodSandbox for \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:26:13.551186 containerd[1605]: time="2025-02-13T15:26:13.551102963Z" level=info msg="Forcibly stopping sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\"" Feb 13 15:26:13.551242 containerd[1605]: time="2025-02-13T15:26:13.551223426Z" level=info msg="TearDown network for sandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" successfully" Feb 13 15:26:13.554891 containerd[1605]: time="2025-02-13T15:26:13.554855930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.555275 containerd[1605]: time="2025-02-13T15:26:13.554910385Z" level=info msg="RemovePodSandbox \"948639831ba1872db0fdbb7e69fc7ef0b2e10946d5adbec71f8ccbe6c8bb42a2\" returns successfully" Feb 13 15:26:13.555275 containerd[1605]: time="2025-02-13T15:26:13.555185236Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:26:13.555352 containerd[1605]: time="2025-02-13T15:26:13.555277074Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:26:13.555352 containerd[1605]: time="2025-02-13T15:26:13.555291041Z" level=info msg="StopPodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:26:13.556857 containerd[1605]: time="2025-02-13T15:26:13.555569399Z" level=info msg="RemovePodSandbox for \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:26:13.556857 containerd[1605]: time="2025-02-13T15:26:13.555593446Z" level=info msg="Forcibly stopping sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\"" Feb 13 15:26:13.556857 containerd[1605]: time="2025-02-13T15:26:13.555685263Z" level=info msg="TearDown network for sandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" successfully" Feb 13 15:26:13.559935 containerd[1605]: time="2025-02-13T15:26:13.559897597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.560015 containerd[1605]: time="2025-02-13T15:26:13.559963355Z" level=info msg="RemovePodSandbox \"bf965d53e7ffe54532c27b1518041e9007f9c0f07665c2cc179aa953b2db520d\" returns successfully" Feb 13 15:26:13.560279 containerd[1605]: time="2025-02-13T15:26:13.560255339Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:26:13.560356 containerd[1605]: time="2025-02-13T15:26:13.560340133Z" level=info msg="TearDown network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" successfully" Feb 13 15:26:13.560405 containerd[1605]: time="2025-02-13T15:26:13.560355543Z" level=info msg="StopPodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" returns successfully" Feb 13 15:26:13.560637 containerd[1605]: time="2025-02-13T15:26:13.560615255Z" level=info msg="RemovePodSandbox for \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:26:13.560701 containerd[1605]: time="2025-02-13T15:26:13.560639131Z" level=info msg="Forcibly stopping sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\"" Feb 13 15:26:13.560756 containerd[1605]: time="2025-02-13T15:26:13.560716721Z" level=info msg="TearDown network for sandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" successfully" Feb 13 15:26:13.564784 containerd[1605]: time="2025-02-13T15:26:13.564759808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.564850 containerd[1605]: time="2025-02-13T15:26:13.564802420Z" level=info msg="RemovePodSandbox \"dd9fb28ddc05d35544857b522806ea0e9da9b09bde50dfe7ff0d6ec6375a5d2e\" returns successfully" Feb 13 15:26:13.565059 containerd[1605]: time="2025-02-13T15:26:13.565031854Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" Feb 13 15:26:13.565134 containerd[1605]: time="2025-02-13T15:26:13.565117340Z" level=info msg="TearDown network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" successfully" Feb 13 15:26:13.565134 containerd[1605]: time="2025-02-13T15:26:13.565128230Z" level=info msg="StopPodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" returns successfully" Feb 13 15:26:13.565327 containerd[1605]: time="2025-02-13T15:26:13.565311625Z" level=info msg="RemovePodSandbox for \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" Feb 13 15:26:13.565327 containerd[1605]: time="2025-02-13T15:26:13.565326975Z" level=info msg="Forcibly stopping sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\"" Feb 13 15:26:13.565416 containerd[1605]: time="2025-02-13T15:26:13.565388092Z" level=info msg="TearDown network for sandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" successfully" Feb 13 15:26:13.569386 containerd[1605]: time="2025-02-13T15:26:13.569339262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.569454 containerd[1605]: time="2025-02-13T15:26:13.569391613Z" level=info msg="RemovePodSandbox \"5583a872dcee6f68fdb447a4a5f7bbdfae9fc6f25f84f9669daf5a4df453af24\" returns successfully" Feb 13 15:26:13.569653 containerd[1605]: time="2025-02-13T15:26:13.569625315Z" level=info msg="StopPodSandbox for \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\"" Feb 13 15:26:13.569750 containerd[1605]: time="2025-02-13T15:26:13.569722352Z" level=info msg="TearDown network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" successfully" Feb 13 15:26:13.569750 containerd[1605]: time="2025-02-13T15:26:13.569738864Z" level=info msg="StopPodSandbox for \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" returns successfully" Feb 13 15:26:13.569992 containerd[1605]: time="2025-02-13T15:26:13.569961845Z" level=info msg="RemovePodSandbox for \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\"" Feb 13 15:26:13.570039 containerd[1605]: time="2025-02-13T15:26:13.569993557Z" level=info msg="Forcibly stopping sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\"" Feb 13 15:26:13.570116 containerd[1605]: time="2025-02-13T15:26:13.570078701Z" level=info msg="TearDown network for sandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" successfully" Feb 13 15:26:13.574334 containerd[1605]: time="2025-02-13T15:26:13.574279784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.574334 containerd[1605]: time="2025-02-13T15:26:13.574343457Z" level=info msg="RemovePodSandbox \"0978a9001465177a1f6e0c81283ebc70f1f7e83c47af6ffec681b1ff7fc1da1c\" returns successfully" Feb 13 15:26:13.574778 containerd[1605]: time="2025-02-13T15:26:13.574750584Z" level=info msg="StopPodSandbox for \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\"" Feb 13 15:26:13.574932 containerd[1605]: time="2025-02-13T15:26:13.574874844Z" level=info msg="TearDown network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\" successfully" Feb 13 15:26:13.574932 containerd[1605]: time="2025-02-13T15:26:13.574929680Z" level=info msg="StopPodSandbox for \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\" returns successfully" Feb 13 15:26:13.575235 containerd[1605]: time="2025-02-13T15:26:13.575126611Z" level=info msg="RemovePodSandbox for \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\"" Feb 13 15:26:13.575235 containerd[1605]: time="2025-02-13T15:26:13.575177440Z" level=info msg="Forcibly stopping sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\"" Feb 13 15:26:13.575332 containerd[1605]: time="2025-02-13T15:26:13.575262785Z" level=info msg="TearDown network for sandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\" successfully" Feb 13 15:26:13.582100 containerd[1605]: time="2025-02-13T15:26:13.582060036Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.582250 containerd[1605]: time="2025-02-13T15:26:13.582118359Z" level=info msg="RemovePodSandbox \"b20fa285650f2aacadf456f40ea503b59bd694d0249dd91e9369bd1c0c067bd6\" returns successfully" Feb 13 15:26:13.582492 containerd[1605]: time="2025-02-13T15:26:13.582471612Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:26:13.582755 containerd[1605]: time="2025-02-13T15:26:13.582698841Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:26:13.582755 containerd[1605]: time="2025-02-13T15:26:13.582749730Z" level=info msg="StopPodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully" Feb 13 15:26:13.583035 containerd[1605]: time="2025-02-13T15:26:13.583007197Z" level=info msg="RemovePodSandbox for \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:26:13.583092 containerd[1605]: time="2025-02-13T15:26:13.583035312Z" level=info msg="Forcibly stopping sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\"" Feb 13 15:26:13.583198 containerd[1605]: time="2025-02-13T15:26:13.583111168Z" level=info msg="TearDown network for sandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" successfully" Feb 13 15:26:13.587185 containerd[1605]: time="2025-02-13T15:26:13.587153754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.587266 containerd[1605]: time="2025-02-13T15:26:13.587209323Z" level=info msg="RemovePodSandbox \"2a4c22e0818ca376c75c52ed323c31644172a4c73e7c429acf28fe9631abaf84\" returns successfully" Feb 13 15:26:13.587582 containerd[1605]: time="2025-02-13T15:26:13.587561293Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:26:13.587690 containerd[1605]: time="2025-02-13T15:26:13.587654743Z" level=info msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully" Feb 13 15:26:13.587717 containerd[1605]: time="2025-02-13T15:26:13.587689881Z" level=info msg="StopPodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully" Feb 13 15:26:13.587894 containerd[1605]: time="2025-02-13T15:26:13.587878886Z" level=info msg="RemovePodSandbox for \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:26:13.587926 containerd[1605]: time="2025-02-13T15:26:13.587896490Z" level=info msg="Forcibly stopping sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\"" Feb 13 15:26:13.588007 containerd[1605]: time="2025-02-13T15:26:13.587963570Z" level=info msg="TearDown network for sandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" successfully" Feb 13 15:26:13.591639 containerd[1605]: time="2025-02-13T15:26:13.591612175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.591740 containerd[1605]: time="2025-02-13T15:26:13.591663274Z" level=info msg="RemovePodSandbox \"698f03c034ffa66c567363c3bc195e9c0fcde9f7918f784893ed7f72811797e0\" returns successfully" Feb 13 15:26:13.591931 containerd[1605]: time="2025-02-13T15:26:13.591914829Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" Feb 13 15:26:13.592024 containerd[1605]: time="2025-02-13T15:26:13.592007559Z" level=info msg="TearDown network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" successfully" Feb 13 15:26:13.592024 containerd[1605]: time="2025-02-13T15:26:13.592021506Z" level=info msg="StopPodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" returns successfully" Feb 13 15:26:13.592240 containerd[1605]: time="2025-02-13T15:26:13.592223526Z" level=info msg="RemovePodSandbox for \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" Feb 13 15:26:13.592240 containerd[1605]: time="2025-02-13T15:26:13.592239888Z" level=info msg="Forcibly stopping sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\"" Feb 13 15:26:13.592349 containerd[1605]: time="2025-02-13T15:26:13.592303171Z" level=info msg="TearDown network for sandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" successfully" Feb 13 15:26:13.595921 containerd[1605]: time="2025-02-13T15:26:13.595890918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.596003 containerd[1605]: time="2025-02-13T15:26:13.595936976Z" level=info msg="RemovePodSandbox \"ecf63658c40dcbbc5bbf8bb29b11f5dea3f3622ec68895ea6965dea7526b6d3a\" returns successfully" Feb 13 15:26:13.596267 containerd[1605]: time="2025-02-13T15:26:13.596242848Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\"" Feb 13 15:26:13.596342 containerd[1605]: time="2025-02-13T15:26:13.596324275Z" level=info msg="TearDown network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" successfully" Feb 13 15:26:13.596342 containerd[1605]: time="2025-02-13T15:26:13.596336659Z" level=info msg="StopPodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" returns successfully" Feb 13 15:26:13.596553 containerd[1605]: time="2025-02-13T15:26:13.596520464Z" level=info msg="RemovePodSandbox for \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\"" Feb 13 15:26:13.596553 containerd[1605]: time="2025-02-13T15:26:13.596547707Z" level=info msg="Forcibly stopping sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\"" Feb 13 15:26:13.596659 containerd[1605]: time="2025-02-13T15:26:13.596617843Z" level=info msg="TearDown network for sandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" successfully" Feb 13 15:26:13.600158 containerd[1605]: time="2025-02-13T15:26:13.600116236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.600217 containerd[1605]: time="2025-02-13T15:26:13.600176153Z" level=info msg="RemovePodSandbox \"ad44972d98a5136b7f397c2f7d4c83f42f270be8970a73b3e50f990940ed857a\" returns successfully" Feb 13 15:26:13.600468 containerd[1605]: time="2025-02-13T15:26:13.600447688Z" level=info msg="StopPodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\"" Feb 13 15:26:13.600579 containerd[1605]: time="2025-02-13T15:26:13.600555686Z" level=info msg="TearDown network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" successfully" Feb 13 15:26:13.600579 containerd[1605]: time="2025-02-13T15:26:13.600573701Z" level=info msg="StopPodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" returns successfully" Feb 13 15:26:13.600769 containerd[1605]: time="2025-02-13T15:26:13.600749611Z" level=info msg="RemovePodSandbox for \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\"" Feb 13 15:26:13.600801 containerd[1605]: time="2025-02-13T15:26:13.600773508Z" level=info msg="Forcibly stopping sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\"" Feb 13 15:26:13.600875 containerd[1605]: time="2025-02-13T15:26:13.600845497Z" level=info msg="TearDown network for sandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" successfully" Feb 13 15:26:13.604454 containerd[1605]: time="2025-02-13T15:26:13.604427412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.604509 containerd[1605]: time="2025-02-13T15:26:13.604467920Z" level=info msg="RemovePodSandbox \"379f43c9f035028c08f81bbb10d1193e10409377c9da884833b685efd702a375\" returns successfully" Feb 13 15:26:13.604682 containerd[1605]: time="2025-02-13T15:26:13.604658018Z" level=info msg="StopPodSandbox for \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\"" Feb 13 15:26:13.604754 containerd[1605]: time="2025-02-13T15:26:13.604739104Z" level=info msg="TearDown network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\" successfully" Feb 13 15:26:13.604754 containerd[1605]: time="2025-02-13T15:26:13.604750607Z" level=info msg="StopPodSandbox for \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\" returns successfully" Feb 13 15:26:13.604958 containerd[1605]: time="2025-02-13T15:26:13.604937809Z" level=info msg="RemovePodSandbox for \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\"" Feb 13 15:26:13.604958 containerd[1605]: time="2025-02-13T15:26:13.604956084Z" level=info msg="Forcibly stopping sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\"" Feb 13 15:26:13.605051 containerd[1605]: time="2025-02-13T15:26:13.605023845Z" level=info msg="TearDown network for sandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\" successfully" Feb 13 15:26:13.608384 containerd[1605]: time="2025-02-13T15:26:13.608364444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.608442 containerd[1605]: time="2025-02-13T15:26:13.608393580Z" level=info msg="RemovePodSandbox \"ef33f2318d4c9c858bddbd56e7f5cb4a19c3fe40c4fd5aff84f6ed015b16d2f4\" returns successfully" Feb 13 15:26:13.608657 containerd[1605]: time="2025-02-13T15:26:13.608632462Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:26:13.608719 containerd[1605]: time="2025-02-13T15:26:13.608706184Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:26:13.608743 containerd[1605]: time="2025-02-13T15:26:13.608717807Z" level=info msg="StopPodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:26:13.608893 containerd[1605]: time="2025-02-13T15:26:13.608877385Z" level=info msg="RemovePodSandbox for \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:26:13.608947 containerd[1605]: time="2025-02-13T15:26:13.608894779Z" level=info msg="Forcibly stopping sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\"" Feb 13 15:26:13.608978 containerd[1605]: time="2025-02-13T15:26:13.608951919Z" level=info msg="TearDown network for sandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" successfully" Feb 13 15:26:13.612251 containerd[1605]: time="2025-02-13T15:26:13.612221932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.612251 containerd[1605]: time="2025-02-13T15:26:13.612250898Z" level=info msg="RemovePodSandbox \"215e017d35f55adfd0c3d2ed54da9bec5463bb4fc4334d91880e7007f4ab51a5\" returns successfully" Feb 13 15:26:13.612464 containerd[1605]: time="2025-02-13T15:26:13.612434433Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:26:13.612550 containerd[1605]: time="2025-02-13T15:26:13.612529626Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:26:13.612550 containerd[1605]: time="2025-02-13T15:26:13.612546900Z" level=info msg="StopPodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:26:13.612764 containerd[1605]: time="2025-02-13T15:26:13.612746606Z" level=info msg="RemovePodSandbox for \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:26:13.612764 containerd[1605]: time="2025-02-13T15:26:13.612764220Z" level=info msg="Forcibly stopping sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\"" Feb 13 15:26:13.612844 containerd[1605]: time="2025-02-13T15:26:13.612822402Z" level=info msg="TearDown network for sandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" successfully" Feb 13 15:26:13.616393 containerd[1605]: time="2025-02-13T15:26:13.616368760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.616468 containerd[1605]: time="2025-02-13T15:26:13.616399970Z" level=info msg="RemovePodSandbox \"40a25ccedac72540826e15d3aad94186739548e239e74752e59876056f5cf491\" returns successfully" Feb 13 15:26:13.616649 containerd[1605]: time="2025-02-13T15:26:13.616608623Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:26:13.616723 containerd[1605]: time="2025-02-13T15:26:13.616704629Z" level=info msg="TearDown network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" successfully" Feb 13 15:26:13.616766 containerd[1605]: time="2025-02-13T15:26:13.616721110Z" level=info msg="StopPodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" returns successfully" Feb 13 15:26:13.616912 containerd[1605]: time="2025-02-13T15:26:13.616894025Z" level=info msg="RemovePodSandbox for \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:26:13.616912 containerd[1605]: time="2025-02-13T15:26:13.616911749Z" level=info msg="Forcibly stopping sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\"" Feb 13 15:26:13.617004 containerd[1605]: time="2025-02-13T15:26:13.616970754Z" level=info msg="TearDown network for sandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" successfully" Feb 13 15:26:13.620173 containerd[1605]: time="2025-02-13T15:26:13.620153517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.620224 containerd[1605]: time="2025-02-13T15:26:13.620181410Z" level=info msg="RemovePodSandbox \"ccc7367a6ae7de9dbf8b8d909308827c9f9cc22b4fb50db4988a72c36e72c17b\" returns successfully" Feb 13 15:26:13.620396 containerd[1605]: time="2025-02-13T15:26:13.620374414Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" Feb 13 15:26:13.620458 containerd[1605]: time="2025-02-13T15:26:13.620445562Z" level=info msg="TearDown network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" successfully" Feb 13 15:26:13.620482 containerd[1605]: time="2025-02-13T15:26:13.620456733Z" level=info msg="StopPodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" returns successfully" Feb 13 15:26:13.620677 containerd[1605]: time="2025-02-13T15:26:13.620657461Z" level=info msg="RemovePodSandbox for \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" Feb 13 15:26:13.620719 containerd[1605]: time="2025-02-13T15:26:13.620680064Z" level=info msg="Forcibly stopping sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\"" Feb 13 15:26:13.620775 containerd[1605]: time="2025-02-13T15:26:13.620751814Z" level=info msg="TearDown network for sandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" successfully" Feb 13 15:26:13.624495 containerd[1605]: time="2025-02-13T15:26:13.624463580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.624552 containerd[1605]: time="2025-02-13T15:26:13.624520270Z" level=info msg="RemovePodSandbox \"52041564042adf13fb0661345c69e8659ee889f3d826842c457735d5671bb953\" returns successfully" Feb 13 15:26:13.624758 containerd[1605]: time="2025-02-13T15:26:13.624736658Z" level=info msg="StopPodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\"" Feb 13 15:26:13.624843 containerd[1605]: time="2025-02-13T15:26:13.624825390Z" level=info msg="TearDown network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" successfully" Feb 13 15:26:13.624843 containerd[1605]: time="2025-02-13T15:26:13.624837883Z" level=info msg="StopPodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" returns successfully" Feb 13 15:26:13.625067 containerd[1605]: time="2025-02-13T15:26:13.625038041Z" level=info msg="RemovePodSandbox for \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\"" Feb 13 15:26:13.625067 containerd[1605]: time="2025-02-13T15:26:13.625063039Z" level=info msg="Forcibly stopping sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\"" Feb 13 15:26:13.625208 containerd[1605]: time="2025-02-13T15:26:13.625177440Z" level=info msg="TearDown network for sandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" successfully" Feb 13 15:26:13.628565 containerd[1605]: time="2025-02-13T15:26:13.628538920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.628605 containerd[1605]: time="2025-02-13T15:26:13.628577755Z" level=info msg="RemovePodSandbox \"6a9a3cef84914d27de0c14025bf67b7c611a0024995fcceeb3b589602249a284\" returns successfully" Feb 13 15:26:13.628834 containerd[1605]: time="2025-02-13T15:26:13.628815304Z" level=info msg="StopPodSandbox for \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\"" Feb 13 15:26:13.628917 containerd[1605]: time="2025-02-13T15:26:13.628902152Z" level=info msg="TearDown network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\" successfully" Feb 13 15:26:13.628940 containerd[1605]: time="2025-02-13T15:26:13.628916881Z" level=info msg="StopPodSandbox for \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\" returns successfully" Feb 13 15:26:13.629113 containerd[1605]: time="2025-02-13T15:26:13.629096267Z" level=info msg="RemovePodSandbox for \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\"" Feb 13 15:26:13.629158 containerd[1605]: time="2025-02-13T15:26:13.629118409Z" level=info msg="Forcibly stopping sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\"" Feb 13 15:26:13.629239 containerd[1605]: time="2025-02-13T15:26:13.629202562Z" level=info msg="TearDown network for sandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\" successfully" Feb 13 15:26:13.632959 containerd[1605]: time="2025-02-13T15:26:13.632821680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.632959 containerd[1605]: time="2025-02-13T15:26:13.632892106Z" level=info msg="RemovePodSandbox \"5130eccd66c8efaab80a624a5b8ab5efaf214fb5016e6464dfd7ed8024145645\" returns successfully" Feb 13 15:26:13.633221 containerd[1605]: time="2025-02-13T15:26:13.633193729Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:26:13.633331 containerd[1605]: time="2025-02-13T15:26:13.633299463Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:26:13.633331 containerd[1605]: time="2025-02-13T15:26:13.633317699Z" level=info msg="StopPodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:26:13.634175 containerd[1605]: time="2025-02-13T15:26:13.633584244Z" level=info msg="RemovePodSandbox for \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:26:13.634175 containerd[1605]: time="2025-02-13T15:26:13.633604834Z" level=info msg="Forcibly stopping sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\"" Feb 13 15:26:13.634175 containerd[1605]: time="2025-02-13T15:26:13.633665251Z" level=info msg="TearDown network for sandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" successfully" Feb 13 15:26:13.637707 containerd[1605]: time="2025-02-13T15:26:13.637670014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.637806 containerd[1605]: time="2025-02-13T15:26:13.637729048Z" level=info msg="RemovePodSandbox \"ca83863bdf572c3ffafcf99fededd266a7f731caff625c48ebb7a61d211aec74\" returns successfully" Feb 13 15:26:13.638068 containerd[1605]: time="2025-02-13T15:26:13.638047143Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:26:13.638172 containerd[1605]: time="2025-02-13T15:26:13.638155873Z" level=info msg="TearDown network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully" Feb 13 15:26:13.638214 containerd[1605]: time="2025-02-13T15:26:13.638172355Z" level=info msg="StopPodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully" Feb 13 15:26:13.638434 containerd[1605]: time="2025-02-13T15:26:13.638411858Z" level=info msg="RemovePodSandbox for \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:26:13.638497 containerd[1605]: time="2025-02-13T15:26:13.638436245Z" level=info msg="Forcibly stopping sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\"" Feb 13 15:26:13.638576 containerd[1605]: time="2025-02-13T15:26:13.638522041Z" level=info msg="TearDown network for sandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" successfully" Feb 13 15:26:13.642071 containerd[1605]: time="2025-02-13T15:26:13.642040373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.642120 containerd[1605]: time="2025-02-13T15:26:13.642077846Z" level=info msg="RemovePodSandbox \"ba84b7afe9ab7e5a05d07daff49730277d4d794536e34d9ac5c4944957e81de4\" returns successfully" Feb 13 15:26:13.642379 containerd[1605]: time="2025-02-13T15:26:13.642358668Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" Feb 13 15:26:13.642464 containerd[1605]: time="2025-02-13T15:26:13.642445798Z" level=info msg="TearDown network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" successfully" Feb 13 15:26:13.642499 containerd[1605]: time="2025-02-13T15:26:13.642462740Z" level=info msg="StopPodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" returns successfully" Feb 13 15:26:13.642679 containerd[1605]: time="2025-02-13T15:26:13.642659350Z" level=info msg="RemovePodSandbox for \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" Feb 13 15:26:13.642731 containerd[1605]: time="2025-02-13T15:26:13.642682855Z" level=info msg="Forcibly stopping sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\"" Feb 13 15:26:13.642805 containerd[1605]: time="2025-02-13T15:26:13.642764253Z" level=info msg="TearDown network for sandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" successfully" Feb 13 15:26:13.646348 containerd[1605]: time="2025-02-13T15:26:13.646321901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.646394 containerd[1605]: time="2025-02-13T15:26:13.646359864Z" level=info msg="RemovePodSandbox \"f603d101f71af7c78dbf789ff50c9bcb13fef6c0994744ba50fe6cadd66a9a5f\" returns successfully" Feb 13 15:26:13.646645 containerd[1605]: time="2025-02-13T15:26:13.646608375Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\"" Feb 13 15:26:13.646731 containerd[1605]: time="2025-02-13T15:26:13.646713238Z" level=info msg="TearDown network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" successfully" Feb 13 15:26:13.646731 containerd[1605]: time="2025-02-13T15:26:13.646727937Z" level=info msg="StopPodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" returns successfully" Feb 13 15:26:13.646912 containerd[1605]: time="2025-02-13T15:26:13.646891082Z" level=info msg="RemovePodSandbox for \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\"" Feb 13 15:26:13.646946 containerd[1605]: time="2025-02-13T15:26:13.646914116Z" level=info msg="Forcibly stopping sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\"" Feb 13 15:26:13.647003 containerd[1605]: time="2025-02-13T15:26:13.646974263Z" level=info msg="TearDown network for sandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" successfully" Feb 13 15:26:13.650477 containerd[1605]: time="2025-02-13T15:26:13.650450794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.650558 containerd[1605]: time="2025-02-13T15:26:13.650496563Z" level=info msg="RemovePodSandbox \"654b1153f3a30c6129d2e8d5b806573b6435f0ea1f47cf563dc2f6fef8c8150f\" returns successfully" Feb 13 15:26:13.650726 containerd[1605]: time="2025-02-13T15:26:13.650704124Z" level=info msg="StopPodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\"" Feb 13 15:26:13.650826 containerd[1605]: time="2025-02-13T15:26:13.650805941Z" level=info msg="TearDown network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" successfully" Feb 13 15:26:13.650826 containerd[1605]: time="2025-02-13T15:26:13.650822493Z" level=info msg="StopPodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" returns successfully" Feb 13 15:26:13.651023 containerd[1605]: time="2025-02-13T15:26:13.651000747Z" level=info msg="RemovePodSandbox for \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\"" Feb 13 15:26:13.651023 containerd[1605]: time="2025-02-13T15:26:13.651021437Z" level=info msg="Forcibly stopping sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\"" Feb 13 15:26:13.651113 containerd[1605]: time="2025-02-13T15:26:13.651086653Z" level=info msg="TearDown network for sandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" successfully" Feb 13 15:26:13.654390 containerd[1605]: time="2025-02-13T15:26:13.654367897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.654448 containerd[1605]: time="2025-02-13T15:26:13.654407394Z" level=info msg="RemovePodSandbox \"a1d45a49540fe0614902dac855f99963f57c51aa62ccaf08e90eae0542053abc\" returns successfully" Feb 13 15:26:13.654728 containerd[1605]: time="2025-02-13T15:26:13.654607912Z" level=info msg="StopPodSandbox for \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\"" Feb 13 15:26:13.654728 containerd[1605]: time="2025-02-13T15:26:13.654678668Z" level=info msg="TearDown network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\" successfully" Feb 13 15:26:13.654728 containerd[1605]: time="2025-02-13T15:26:13.654689730Z" level=info msg="StopPodSandbox for \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\" returns successfully" Feb 13 15:26:13.654922 containerd[1605]: time="2025-02-13T15:26:13.654902581Z" level=info msg="RemovePodSandbox for \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\"" Feb 13 15:26:13.654922 containerd[1605]: time="2025-02-13T15:26:13.654925455Z" level=info msg="Forcibly stopping sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\"" Feb 13 15:26:13.655030 containerd[1605]: time="2025-02-13T15:26:13.655000651Z" level=info msg="TearDown network for sandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\" successfully" Feb 13 15:26:13.658310 containerd[1605]: time="2025-02-13T15:26:13.658285712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.658357 containerd[1605]: time="2025-02-13T15:26:13.658321511Z" level=info msg="RemovePodSandbox \"e474216b722e898898b5c2d8d13eeb0c72d15277541298cbb3f6aa4787c09c85\" returns successfully" Feb 13 15:26:13.658579 containerd[1605]: time="2025-02-13T15:26:13.658546446Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:26:13.658635 containerd[1605]: time="2025-02-13T15:26:13.658621281Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:26:13.658635 containerd[1605]: time="2025-02-13T15:26:13.658630329Z" level=info msg="StopPodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:26:13.658845 containerd[1605]: time="2025-02-13T15:26:13.658827229Z" level=info msg="RemovePodSandbox for \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:26:13.658845 containerd[1605]: time="2025-02-13T15:26:13.658843971Z" level=info msg="Forcibly stopping sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\"" Feb 13 15:26:13.658927 containerd[1605]: time="2025-02-13T15:26:13.658902826Z" level=info msg="TearDown network for sandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" successfully" Feb 13 15:26:13.662304 containerd[1605]: time="2025-02-13T15:26:13.662284844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.662371 containerd[1605]: time="2025-02-13T15:26:13.662313119Z" level=info msg="RemovePodSandbox \"524dadcd08877231e533fc2b59c56665274cdd4899c72bf535c663c0de2cd360\" returns successfully" Feb 13 15:26:13.662535 containerd[1605]: time="2025-02-13T15:26:13.662515180Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:26:13.662623 containerd[1605]: time="2025-02-13T15:26:13.662607558Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully" Feb 13 15:26:13.662651 containerd[1605]: time="2025-02-13T15:26:13.662622848Z" level=info msg="StopPodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully" Feb 13 15:26:13.662859 containerd[1605]: time="2025-02-13T15:26:13.662828155Z" level=info msg="RemovePodSandbox for \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:26:13.662859 containerd[1605]: time="2025-02-13T15:26:13.662852683Z" level=info msg="Forcibly stopping sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\"" Feb 13 15:26:13.662965 containerd[1605]: time="2025-02-13T15:26:13.662927296Z" level=info msg="TearDown network for sandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" successfully" Feb 13 15:26:13.666331 containerd[1605]: time="2025-02-13T15:26:13.666300769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.666371 containerd[1605]: time="2025-02-13T15:26:13.666340215Z" level=info msg="RemovePodSandbox \"2ed7b248fec16fddfef8f9231243356ad420e9c91c9990706fb3ba9b730edaa0\" returns successfully" Feb 13 15:26:13.666587 containerd[1605]: time="2025-02-13T15:26:13.666562134Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" Feb 13 15:26:13.666670 containerd[1605]: time="2025-02-13T15:26:13.666650003Z" level=info msg="TearDown network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" successfully" Feb 13 15:26:13.666670 containerd[1605]: time="2025-02-13T15:26:13.666665153Z" level=info msg="StopPodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" returns successfully" Feb 13 15:26:13.666880 containerd[1605]: time="2025-02-13T15:26:13.666859559Z" level=info msg="RemovePodSandbox for \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" Feb 13 15:26:13.666880 containerd[1605]: time="2025-02-13T15:26:13.666880850Z" level=info msg="Forcibly stopping sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\"" Feb 13 15:26:13.666989 containerd[1605]: time="2025-02-13T15:26:13.666954442Z" level=info msg="TearDown network for sandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" successfully" Feb 13 15:26:13.670255 containerd[1605]: time="2025-02-13T15:26:13.670226047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.670324 containerd[1605]: time="2025-02-13T15:26:13.670261657Z" level=info msg="RemovePodSandbox \"54c27390fb46978c140628c975cc367ddba568f5a67a3d1f30e8c19a325a3666\" returns successfully" Feb 13 15:26:13.670500 containerd[1605]: time="2025-02-13T15:26:13.670459519Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\"" Feb 13 15:26:13.670572 containerd[1605]: time="2025-02-13T15:26:13.670562488Z" level=info msg="TearDown network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" successfully" Feb 13 15:26:13.670606 containerd[1605]: time="2025-02-13T15:26:13.670575854Z" level=info msg="StopPodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" returns successfully" Feb 13 15:26:13.670769 containerd[1605]: time="2025-02-13T15:26:13.670751654Z" level=info msg="RemovePodSandbox for \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\"" Feb 13 15:26:13.670812 containerd[1605]: time="2025-02-13T15:26:13.670771082Z" level=info msg="Forcibly stopping sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\"" Feb 13 15:26:13.670874 containerd[1605]: time="2025-02-13T15:26:13.670856917Z" level=info msg="TearDown network for sandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" successfully" Feb 13 15:26:13.674253 containerd[1605]: time="2025-02-13T15:26:13.674217976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.674319 containerd[1605]: time="2025-02-13T15:26:13.674258414Z" level=info msg="RemovePodSandbox \"b65b83d166a7a71ed2cc83add87882ef4cc9eda8211b884cdc3ff4733deffc96\" returns successfully" Feb 13 15:26:13.674556 containerd[1605]: time="2025-02-13T15:26:13.674524598Z" level=info msg="StopPodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\"" Feb 13 15:26:13.674670 containerd[1605]: time="2025-02-13T15:26:13.674646315Z" level=info msg="TearDown network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" successfully" Feb 13 15:26:13.674670 containerd[1605]: time="2025-02-13T15:26:13.674664520Z" level=info msg="StopPodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" returns successfully" Feb 13 15:26:13.674902 containerd[1605]: time="2025-02-13T15:26:13.674873614Z" level=info msg="RemovePodSandbox for \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\"" Feb 13 15:26:13.674902 containerd[1605]: time="2025-02-13T15:26:13.674895877Z" level=info msg="Forcibly stopping sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\"" Feb 13 15:26:13.675000 containerd[1605]: time="2025-02-13T15:26:13.674970681Z" level=info msg="TearDown network for sandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" successfully" Feb 13 15:26:13.678220 containerd[1605]: time="2025-02-13T15:26:13.678189986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.678285 containerd[1605]: time="2025-02-13T15:26:13.678228230Z" level=info msg="RemovePodSandbox \"7d135632d96a290210ec5103f1755cd5346ea66fbe013b5ec1d787eb344d6a5e\" returns successfully" Feb 13 15:26:13.678464 containerd[1605]: time="2025-02-13T15:26:13.678441652Z" level=info msg="StopPodSandbox for \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\"" Feb 13 15:26:13.678553 containerd[1605]: time="2025-02-13T15:26:13.678536916Z" level=info msg="TearDown network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\" successfully" Feb 13 15:26:13.678583 containerd[1605]: time="2025-02-13T15:26:13.678552296Z" level=info msg="StopPodSandbox for \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\" returns successfully" Feb 13 15:26:13.678801 containerd[1605]: time="2025-02-13T15:26:13.678781469Z" level=info msg="RemovePodSandbox for \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\"" Feb 13 15:26:13.678852 containerd[1605]: time="2025-02-13T15:26:13.678805817Z" level=info msg="Forcibly stopping sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\"" Feb 13 15:26:13.678910 containerd[1605]: time="2025-02-13T15:26:13.678879679Z" level=info msg="TearDown network for sandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\" successfully" Feb 13 15:26:13.682607 containerd[1605]: time="2025-02-13T15:26:13.682575405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:26:13.682681 containerd[1605]: time="2025-02-13T15:26:13.682624840Z" level=info msg="RemovePodSandbox \"f034afbcee4f5354bc4514e2b3e264c95008db2cdf52eaf1d6875e7b167cd99e\" returns successfully" Feb 13 15:26:15.674677 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:57756.service - OpenSSH per-connection server daemon (10.0.0.1:57756). Feb 13 15:26:15.734452 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 57756 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:15.737603 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:15.742405 systemd-logind[1588]: New session 14 of user core. Feb 13 15:26:15.749686 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:26:15.901413 sshd[6170]: Connection closed by 10.0.0.1 port 57756 Feb 13 15:26:15.901792 sshd-session[6167]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:15.905648 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:57756.service: Deactivated successfully. Feb 13 15:26:15.909020 systemd-logind[1588]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:26:15.909629 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:26:15.910516 systemd-logind[1588]: Removed session 14. Feb 13 15:26:20.917522 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:57760.service - OpenSSH per-connection server daemon (10.0.0.1:57760). Feb 13 15:26:20.966217 sshd[6229]: Accepted publickey for core from 10.0.0.1 port 57760 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:20.968169 sshd-session[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:20.972498 systemd-logind[1588]: New session 15 of user core. Feb 13 15:26:20.977448 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 15:26:21.104425 sshd[6232]: Connection closed by 10.0.0.1 port 57760 Feb 13 15:26:21.106381 sshd-session[6229]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:21.110758 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:57760.service: Deactivated successfully. Feb 13 15:26:21.113345 systemd-logind[1588]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:26:21.113471 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:26:21.114912 systemd-logind[1588]: Removed session 15. Feb 13 15:26:26.120380 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:59700.service - OpenSSH per-connection server daemon (10.0.0.1:59700). Feb 13 15:26:26.158059 sshd[6244]: Accepted publickey for core from 10.0.0.1 port 59700 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:26.159560 sshd-session[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:26.163648 systemd-logind[1588]: New session 16 of user core. Feb 13 15:26:26.170393 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:26:26.283843 sshd[6247]: Connection closed by 10.0.0.1 port 59700 Feb 13 15:26:26.284205 sshd-session[6244]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:26.295431 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:59710.service - OpenSSH per-connection server daemon (10.0.0.1:59710). Feb 13 15:26:26.296079 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:59700.service: Deactivated successfully. Feb 13 15:26:26.298107 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:26:26.300009 systemd-logind[1588]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:26:26.301004 systemd-logind[1588]: Removed session 16. 
Feb 13 15:26:26.336850 sshd[6256]: Accepted publickey for core from 10.0.0.1 port 59710 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:26.338536 sshd-session[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:26.342822 systemd-logind[1588]: New session 17 of user core. Feb 13 15:26:26.353406 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:26:27.530844 sshd[6262]: Connection closed by 10.0.0.1 port 59710 Feb 13 15:26:27.534505 sshd-session[6256]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:27.561801 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:59718.service - OpenSSH per-connection server daemon (10.0.0.1:59718). Feb 13 15:26:27.563378 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:59710.service: Deactivated successfully. Feb 13 15:26:27.566989 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:26:27.569688 systemd-logind[1588]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:26:27.571080 systemd-logind[1588]: Removed session 17. Feb 13 15:26:27.652877 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 59718 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:27.655629 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:27.668894 systemd-logind[1588]: New session 18 of user core. Feb 13 15:26:27.678601 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:26:29.406832 sshd[6275]: Connection closed by 10.0.0.1 port 59718 Feb 13 15:26:29.408239 sshd-session[6269]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:29.417526 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:59726.service - OpenSSH per-connection server daemon (10.0.0.1:59726). Feb 13 15:26:29.418254 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:59718.service: Deactivated successfully. 
Feb 13 15:26:29.431396 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:26:29.434684 systemd-logind[1588]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:26:29.437125 systemd-logind[1588]: Removed session 18. Feb 13 15:26:29.464170 sshd[6290]: Accepted publickey for core from 10.0.0.1 port 59726 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:29.465943 sshd-session[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:29.470407 systemd-logind[1588]: New session 19 of user core. Feb 13 15:26:29.477411 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:26:29.693197 sshd[6298]: Connection closed by 10.0.0.1 port 59726 Feb 13 15:26:29.693798 sshd-session[6290]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:29.704641 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:59730.service - OpenSSH per-connection server daemon (10.0.0.1:59730). Feb 13 15:26:29.709238 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:59726.service: Deactivated successfully. Feb 13 15:26:29.711505 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:26:29.712331 systemd-logind[1588]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:26:29.713751 systemd-logind[1588]: Removed session 19. Feb 13 15:26:29.749436 sshd[6306]: Accepted publickey for core from 10.0.0.1 port 59730 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:29.751402 sshd-session[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:29.755853 systemd-logind[1588]: New session 20 of user core. Feb 13 15:26:29.771455 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 15:26:29.893163 sshd[6312]: Connection closed by 10.0.0.1 port 59730 Feb 13 15:26:29.893533 sshd-session[6306]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:29.897908 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:59730.service: Deactivated successfully. Feb 13 15:26:29.901163 systemd-logind[1588]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:26:29.901235 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:26:29.902475 systemd-logind[1588]: Removed session 20. Feb 13 15:26:32.494960 kubelet[2845]: E0213 15:26:32.494913 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:34.909400 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:32970.service - OpenSSH per-connection server daemon (10.0.0.1:32970). Feb 13 15:26:34.947214 sshd[6327]: Accepted publickey for core from 10.0.0.1 port 32970 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:34.948754 sshd-session[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:34.953272 systemd-logind[1588]: New session 21 of user core. Feb 13 15:26:34.960636 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:26:35.072820 sshd[6330]: Connection closed by 10.0.0.1 port 32970 Feb 13 15:26:35.073202 sshd-session[6327]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:35.077129 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:32970.service: Deactivated successfully. Feb 13 15:26:35.079473 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:26:35.079582 systemd-logind[1588]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:26:35.080780 systemd-logind[1588]: Removed session 21. 
Feb 13 15:26:36.495304 kubelet[2845]: E0213 15:26:36.495264 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:36.495823 kubelet[2845]: E0213 15:26:36.495404 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:40.082364 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:32984.service - OpenSSH per-connection server daemon (10.0.0.1:32984). Feb 13 15:26:40.121478 sshd[6350]: Accepted publickey for core from 10.0.0.1 port 32984 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:40.123110 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:40.127714 systemd-logind[1588]: New session 22 of user core. Feb 13 15:26:40.136482 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:26:40.245351 sshd[6353]: Connection closed by 10.0.0.1 port 32984 Feb 13 15:26:40.245721 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:40.250311 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:32984.service: Deactivated successfully. Feb 13 15:26:40.253417 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:26:40.254223 systemd-logind[1588]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:26:40.255087 systemd-logind[1588]: Removed session 22. Feb 13 15:26:42.494355 kubelet[2845]: E0213 15:26:42.494302 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:45.257399 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:52050.service - OpenSSH per-connection server daemon (10.0.0.1:52050). 
Feb 13 15:26:45.299594 sshd[6386]: Accepted publickey for core from 10.0.0.1 port 52050 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:45.301170 sshd-session[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:45.305186 systemd-logind[1588]: New session 23 of user core. Feb 13 15:26:45.315378 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:26:45.432168 sshd[6389]: Connection closed by 10.0.0.1 port 52050 Feb 13 15:26:45.432467 sshd-session[6386]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:45.436986 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:52050.service: Deactivated successfully. Feb 13 15:26:45.439411 systemd-logind[1588]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:26:45.439557 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:26:45.440704 systemd-logind[1588]: Removed session 23. Feb 13 15:26:46.163259 kubelet[2845]: E0213 15:26:46.163222 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:50.445350 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:52062.service - OpenSSH per-connection server daemon (10.0.0.1:52062). Feb 13 15:26:50.486598 sshd[6442]: Accepted publickey for core from 10.0.0.1 port 52062 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:26:50.488038 sshd-session[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:50.491674 systemd-logind[1588]: New session 24 of user core. Feb 13 15:26:50.498530 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 15:26:50.628422 sshd[6445]: Connection closed by 10.0.0.1 port 52062 Feb 13 15:26:50.628813 sshd-session[6442]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:50.632651 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:52062.service: Deactivated successfully. Feb 13 15:26:50.634926 systemd-logind[1588]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:26:50.634976 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:26:50.636223 systemd-logind[1588]: Removed session 24. Feb 13 15:26:51.494821 kubelet[2845]: E0213 15:26:51.494784 2845 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"