Feb 13 15:41:23.906968 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025
Feb 13 15:41:23.906991 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:41:23.907002 kernel: BIOS-provided physical RAM map:
Feb 13 15:41:23.907009 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:41:23.907015 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:41:23.907021 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:41:23.907029 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:41:23.907035 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:41:23.907042 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:41:23.907048 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:41:23.907054 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:41:23.907063 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:41:23.907070 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:41:23.907076 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:41:23.907084 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:41:23.907091 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:41:23.907100 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:41:23.907107 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:41:23.907113 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:41:23.907120 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:41:23.907127 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:41:23.907133 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:41:23.907140 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:41:23.907147 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:41:23.907154 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:41:23.907160 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:41:23.907167 kernel: NX (Execute Disable) protection: active
Feb 13 15:41:23.907176 kernel: APIC: Static calls initialized
Feb 13 15:41:23.907183 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:41:23.907190 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:41:23.907196 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:41:23.907203 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:41:23.907210 kernel: extended physical RAM map:
Feb 13 15:41:23.907216 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:41:23.907223 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:41:23.907230 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:41:23.907237 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:41:23.907244 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:41:23.907250 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:41:23.907259 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:41:23.907270 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:41:23.907277 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:41:23.907284 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:41:23.907291 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:41:23.907298 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:41:23.907307 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:41:23.907328 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:41:23.907335 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:41:23.907342 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:41:23.907350 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:41:23.907357 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:41:23.907364 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:41:23.907371 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:41:23.907378 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:41:23.907388 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:41:23.907395 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:41:23.907402 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:41:23.907409 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:41:23.907416 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:41:23.907423 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:41:23.907430 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:41:23.907437 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:41:23.907444 kernel: random: crng init done
Feb 13 15:41:23.907452 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:41:23.907459 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:41:23.907466 kernel: secureboot: Secure boot disabled
Feb 13 15:41:23.907475 kernel: SMBIOS 2.8 present.
Feb 13 15:41:23.907482 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:41:23.907490 kernel: Hypervisor detected: KVM
Feb 13 15:41:23.907497 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:41:23.907504 kernel: kvm-clock: using sched offset of 2696994502 cycles
Feb 13 15:41:23.907511 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:41:23.907519 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 15:41:23.907526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:41:23.907534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:41:23.907551 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:41:23.907561 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:41:23.907568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:41:23.907575 kernel: Using GB pages for direct mapping
Feb 13 15:41:23.907583 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:41:23.907590 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:41:23.907597 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:41:23.907605 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:23.907612 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:23.907619 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:41:23.907629 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:23.907636 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:23.907644 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:23.907651 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:23.907658 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:41:23.907665 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:41:23.907672 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:41:23.907680 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:41:23.907687 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:41:23.907697 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:41:23.907704 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:41:23.907711 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:41:23.907718 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:41:23.907725 kernel: No NUMA configuration found
Feb 13 15:41:23.907732 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:41:23.907740 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:41:23.907747 kernel: Zone ranges:
Feb 13 15:41:23.907754 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:41:23.907764 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:41:23.907771 kernel: Normal empty
Feb 13 15:41:23.907778 kernel: Movable zone start for each node
Feb 13 15:41:23.907785 kernel: Early memory node ranges
Feb 13 15:41:23.907793 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:41:23.907800 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:41:23.907807 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:41:23.907814 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:41:23.907821 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:41:23.907831 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:41:23.907838 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:41:23.907845 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:41:23.907852 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:41:23.907859 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:41:23.907867 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:41:23.907882 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:41:23.907892 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:41:23.907899 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:41:23.907906 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:41:23.907914 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:41:23.907921 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:41:23.907931 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:41:23.907938 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:41:23.907946 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:41:23.907953 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:41:23.907961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:41:23.907971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:41:23.907978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:41:23.907986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:41:23.907993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:41:23.908001 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:41:23.908008 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:41:23.908016 kernel: TSC deadline timer available
Feb 13 15:41:23.908023 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:41:23.908031 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:41:23.908041 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:41:23.908048 kernel: kvm-guest: setup PV sched yield
Feb 13 15:41:23.908055 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:41:23.908063 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:41:23.908071 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:41:23.908078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:41:23.908086 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:41:23.908093 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:41:23.908101 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:41:23.908108 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:41:23.908119 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:41:23.908127 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:41:23.908135 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:41:23.908143 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:41:23.908150 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:41:23.908158 kernel: Fallback order for Node 0: 0
Feb 13 15:41:23.908166 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:41:23.908173 kernel: Policy zone: DMA32
Feb 13 15:41:23.908183 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:41:23.908191 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved)
Feb 13 15:41:23.908198 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:41:23.908206 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 15:41:23.908213 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:41:23.908221 kernel: Dynamic Preempt: voluntary
Feb 13 15:41:23.908228 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:41:23.908237 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:41:23.908244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:41:23.908255 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:41:23.908262 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:41:23.908270 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:41:23.908277 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:41:23.908285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:41:23.908292 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:41:23.908300 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:41:23.908307 kernel: Console: colour dummy device 80x25
Feb 13 15:41:23.908338 kernel: printk: console [ttyS0] enabled
Feb 13 15:41:23.908349 kernel: ACPI: Core revision 20230628
Feb 13 15:41:23.908357 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:41:23.908365 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:41:23.908372 kernel: x2apic enabled
Feb 13 15:41:23.908380 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:41:23.908387 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:41:23.908395 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:41:23.908403 kernel: kvm-guest: setup PV IPIs
Feb 13 15:41:23.908412 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:41:23.908422 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:41:23.908431 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 15:41:23.908440 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:41:23.908448 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:41:23.908458 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:41:23.908465 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:41:23.908473 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:41:23.908481 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:41:23.908488 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:41:23.908498 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:41:23.908506 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:41:23.908513 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:41:23.908521 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:41:23.908529 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:41:23.908548 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:41:23.908558 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:41:23.908568 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:41:23.908582 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:41:23.908592 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:41:23.908601 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:41:23.908611 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:41:23.908621 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:41:23.908631 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:41:23.908640 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:41:23.908648 kernel: landlock: Up and running.
Feb 13 15:41:23.908655 kernel: SELinux: Initializing.
Feb 13 15:41:23.908666 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:41:23.908673 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:41:23.908681 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:41:23.908689 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:41:23.908696 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:41:23.908704 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:41:23.908712 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:41:23.908719 kernel: ... version: 0
Feb 13 15:41:23.908727 kernel: ... bit width: 48
Feb 13 15:41:23.908737 kernel: ... generic registers: 6
Feb 13 15:41:23.908744 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:41:23.908752 kernel: ... max period: 00007fffffffffff
Feb 13 15:41:23.908759 kernel: ... fixed-purpose events: 0
Feb 13 15:41:23.908767 kernel: ... event mask: 000000000000003f
Feb 13 15:41:23.908774 kernel: signal: max sigframe size: 1776
Feb 13 15:41:23.908782 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:41:23.908790 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:41:23.908797 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:41:23.908807 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:41:23.908814 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:41:23.908822 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:41:23.908829 kernel: smpboot: Max logical packages: 1
Feb 13 15:41:23.908837 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 15:41:23.908844 kernel: devtmpfs: initialized
Feb 13 15:41:23.908852 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:41:23.908859 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:41:23.908867 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:41:23.908877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:41:23.908885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:41:23.908892 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:41:23.908900 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:41:23.908907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:41:23.908915 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:41:23.908923 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:41:23.908930 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:41:23.908938 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:41:23.908948 kernel: audit: type=2000 audit(1739461282.937:1): state=initialized audit_enabled=0 res=1
Feb 13 15:41:23.908955 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:41:23.908963 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:41:23.908970 kernel: cpuidle: using governor menu
Feb 13 15:41:23.908978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:41:23.908986 kernel: dca service started, version 1.12.1
Feb 13 15:41:23.908993 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:41:23.909001 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:41:23.909009 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:41:23.909019 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:41:23.909026 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:41:23.909034 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:41:23.909042 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:41:23.909049 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:41:23.909057 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:41:23.909064 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:41:23.909072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:41:23.909079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:41:23.909089 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:41:23.909097 kernel: ACPI: Interpreter enabled
Feb 13 15:41:23.909104 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:41:23.909112 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:41:23.909120 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:41:23.909128 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:41:23.909135 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:41:23.909143 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:41:23.909377 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:41:23.909552 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:41:23.909680 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:41:23.909690 kernel: PCI host bridge to bus 0000:00
Feb 13 15:41:23.909843 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:41:23.909999 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:41:23.910151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:41:23.910352 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:41:23.910500 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:41:23.910674 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:41:23.910800 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:41:23.910948 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:41:23.911093 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:41:23.911231 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:41:23.911376 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:41:23.911507 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:41:23.911649 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:41:23.911776 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:41:23.911909 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:41:23.912034 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:41:23.912164 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:41:23.912288 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:41:23.912463 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:41:23.912629 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:41:23.912757 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:41:23.912883 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:41:23.913015 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:41:23.913146 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:41:23.913272 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:41:23.913441 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:41:23.913577 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:41:23.913708 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:41:23.913830 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:41:23.913959 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:41:23.914086 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:41:23.914207 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:41:23.914363 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:41:23.914490 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:41:23.914501 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:41:23.914509 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:41:23.914516 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:41:23.914528 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:41:23.914547 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:41:23.914557 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:41:23.914565 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:41:23.914572 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:41:23.914580 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:41:23.914588 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:41:23.914595 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:41:23.914603 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:41:23.914614 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:41:23.914622 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:41:23.914629 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:41:23.914637 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:41:23.914645 kernel: iommu: Default domain type: Translated
Feb 13 15:41:23.914652 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:41:23.914660 kernel: efivars: Registered efivars operations
Feb 13 15:41:23.914667 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:41:23.914675 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:41:23.914685 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:41:23.914692 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:41:23.914700 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:41:23.914707 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:41:23.914715 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:41:23.914723 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:41:23.914730 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:41:23.914738 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:41:23.914866 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:41:23.914993 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:41:23.915118 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:41:23.915129 kernel: vgaarb: loaded
Feb 13 15:41:23.915136 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:41:23.915144 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:41:23.915152 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:41:23.915159 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:41:23.915167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:41:23.915178 kernel: pnp: PnP ACPI init
Feb 13 15:41:23.915349 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:41:23.915362 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:41:23.915370 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:41:23.915378 kernel: NET: Registered PF_INET protocol family
Feb 13 15:41:23.915405 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:41:23.915416 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:41:23.915424 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:41:23.915434 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:41:23.915442 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:41:23.915450 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:41:23.915458 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:41:23.915466 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:41:23.915474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:41:23.915482 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:41:23.915621 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:41:23.915745 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:41:23.915861 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:41:23.915973 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:41:23.916126 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:41:23.916240 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:41:23.916437 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:41:23.916580 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:41:23.916593 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:41:23.916601 kernel: Initialise system trusted keyrings
Feb 13 15:41:23.916613 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:41:23.916621 kernel: Key type asymmetric registered
Feb 13 15:41:23.916629 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:41:23.916637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:41:23.916645 kernel: io scheduler mq-deadline registered
Feb 13 15:41:23.916652 kernel: io scheduler kyber registered
Feb 13 15:41:23.916660 kernel: io scheduler bfq registered
Feb 13 15:41:23.916668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:41:23.916676 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:41:23.916687 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:41:23.916697 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:41:23.916705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:41:23.916713 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:41:23.916721 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:41:23.916729 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:41:23.916740 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:41:23.916867 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:41:23.916879 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:41:23.916991 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:41:23.917119 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:41:23 UTC (1739461283)
Feb 13 15:41:23.917241 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:41:23.917251 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:41:23.917262 kernel: efifb: probing for efifb
Feb 13 15:41:23.917270 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:41:23.917278 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:41:23.917286 kernel: efifb: scrolling: redraw
Feb 13 15:41:23.917294 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:41:23.917302 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:41:23.917310 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:41:23.917367 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:41:23.917375 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:41:23.917383 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:41:23.917394 kernel: Segment Routing with IPv6
Feb 13 15:41:23.917402 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:41:23.917409 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:41:23.917417 kernel: Key type dns_resolver registered
Feb 13 15:41:23.917425 kernel: IPI shorthand broadcast: enabled
Feb 13 15:41:23.917433 kernel: sched_clock: Marking stable (696002951, 179969242)->(908174607, -32202414)
Feb 13 15:41:23.917441 kernel: registered taskstats version 1
Feb 13 15:41:23.917449 kernel: Loading compiled-in X.509 certificates
Feb 13 15:41:23.917457 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21'
Feb 13 15:41:23.917467 kernel: Key type .fscrypt registered
Feb 13 15:41:23.917474 kernel: Key type fscrypt-provisioning registered
Feb 13 15:41:23.917482 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:41:23.917490 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:41:23.917498 kernel: ima: No architecture policies found Feb 13 15:41:23.917505 kernel: clk: Disabling unused clocks Feb 13 15:41:23.917513 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 15:41:23.917522 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:41:23.917532 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 15:41:23.917549 kernel: Run /init as init process Feb 13 15:41:23.917557 kernel: with arguments: Feb 13 15:41:23.917565 kernel: /init Feb 13 15:41:23.917573 kernel: with environment: Feb 13 15:41:23.917581 kernel: HOME=/ Feb 13 15:41:23.917588 kernel: TERM=linux Feb 13 15:41:23.917596 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:41:23.917605 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:41:23.917619 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:41:23.917628 systemd[1]: Detected virtualization kvm. Feb 13 15:41:23.917636 systemd[1]: Detected architecture x86-64. Feb 13 15:41:23.917644 systemd[1]: Running in initrd. Feb 13 15:41:23.917661 systemd[1]: No hostname configured, using default hostname. Feb 13 15:41:23.917670 systemd[1]: Hostname set to . Feb 13 15:41:23.917686 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:41:23.917695 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:41:23.917706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:41:23.917723 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:41:23.917732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:41:23.917748 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:41:23.917757 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:41:23.917767 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:41:23.917782 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:41:23.917793 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:41:23.917802 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:41:23.917810 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:41:23.917818 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:41:23.917827 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:41:23.917835 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:41:23.917843 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:41:23.917852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:41:23.917865 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:41:23.917874 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:41:23.917882 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:41:23.917891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:41:23.917899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:41:23.917908 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:41:23.917916 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:41:23.917924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:41:23.917936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:41:23.917947 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:41:23.917955 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:41:23.917971 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:41:23.917985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:41:23.917995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:41:23.918006 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:41:23.918017 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:41:23.918034 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:41:23.918071 systemd-journald[193]: Collecting audit messages is disabled. Feb 13 15:41:23.918096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:41:23.918105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:41:23.918113 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:41:23.918122 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:41:23.918131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:41:23.918140 systemd-journald[193]: Journal started Feb 13 15:41:23.918161 systemd-journald[193]: Runtime Journal (/run/log/journal/6e2ca618b9aa4a3a9f119580fec08ead) is 6M, max 48.2M, 42.2M free. 
Feb 13 15:41:23.896597 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 15:41:23.919927 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:41:23.924789 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:41:23.928179 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:41:23.928202 kernel: Bridge firewalling registered Feb 13 15:41:23.928061 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 15:41:23.929443 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:41:23.938607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:41:23.939422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:41:23.940210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:41:23.942004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:41:23.952997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:41:23.955768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:41:23.959357 dracut-cmdline[226]: dracut-dracut-053 Feb 13 15:41:23.962541 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:41:23.967494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:41:24.004121 systemd-resolved[239]: Positive Trust Anchors: Feb 13 15:41:24.004139 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:41:24.004170 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:41:24.006691 systemd-resolved[239]: Defaulting to hostname 'linux'. Feb 13 15:41:24.007911 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:41:24.015236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:41:24.047368 kernel: SCSI subsystem initialized Feb 13 15:41:24.056351 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:41:24.067369 kernel: iscsi: registered transport (tcp) Feb 13 15:41:24.088372 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:41:24.088445 kernel: QLogic iSCSI HBA Driver Feb 13 15:41:24.138668 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:41:24.155561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:41:24.181736 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 15:41:24.181782 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:41:24.183009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:41:24.225359 kernel: raid6: avx2x4 gen() 29235 MB/s Feb 13 15:41:24.242364 kernel: raid6: avx2x2 gen() 29460 MB/s Feb 13 15:41:24.259435 kernel: raid6: avx2x1 gen() 25606 MB/s Feb 13 15:41:24.259464 kernel: raid6: using algorithm avx2x2 gen() 29460 MB/s Feb 13 15:41:24.277542 kernel: raid6: .... xor() 19565 MB/s, rmw enabled Feb 13 15:41:24.277597 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:41:24.298342 kernel: xor: automatically using best checksumming function avx Feb 13 15:41:24.448352 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:41:24.462212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:41:24.475530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:41:24.493352 systemd-udevd[417]: Using default interface naming scheme 'v255'. Feb 13 15:41:24.499508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:41:24.526647 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:41:24.539555 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Feb 13 15:41:24.573870 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:41:24.583486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:41:24.651884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:41:24.669535 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:41:24.681837 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:41:24.684719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:41:24.687531 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:41:24.688882 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:41:24.710121 kernel: libata version 3.00 loaded. Feb 13 15:41:24.710173 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:41:24.726018 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:41:24.726220 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:41:24.726236 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:41:24.749336 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:41:24.749361 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:41:24.749377 kernel: GPT:9289727 != 19775487 Feb 13 15:41:24.749391 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:41:24.749405 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:41:24.749623 kernel: GPT:9289727 != 19775487 Feb 13 15:41:24.749640 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:41:24.749842 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:41:24.749857 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:41:24.749871 kernel: scsi host0: ahci Feb 13 15:41:24.750070 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 15:41:24.750086 kernel: scsi host1: ahci Feb 13 15:41:24.750278 kernel: AES CTR mode by8 optimization enabled Feb 13 15:41:24.750295 kernel: scsi host2: ahci Feb 13 15:41:24.750526 kernel: scsi host3: ahci Feb 13 15:41:24.750726 kernel: scsi host4: ahci Feb 13 15:41:24.750918 kernel: scsi host5: ahci Feb 13 15:41:24.751113 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:41:24.751130 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:41:24.751144 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:41:24.751158 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:41:24.751172 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:41:24.751190 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:41:24.707292 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:41:24.721396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:41:24.761347 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463) Feb 13 15:41:24.754543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:41:24.754783 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:41:24.757915 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:41:24.761615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:41:24.772471 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) Feb 13 15:41:24.761915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:41:24.766332 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 15:41:24.776828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:41:24.789128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:41:24.822861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:41:24.835004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:41:24.838428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:41:24.850569 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:41:24.864525 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:41:24.883652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:41:24.887306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:41:24.893568 disk-uuid[558]: Primary Header is updated. Feb 13 15:41:24.893568 disk-uuid[558]: Secondary Entries is updated. Feb 13 15:41:24.893568 disk-uuid[558]: Secondary Header is updated. Feb 13 15:41:24.897332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:41:24.902336 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:41:24.920698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:41:25.061610 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:41:25.061692 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:41:25.061706 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:41:25.063348 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:41:25.063433 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:41:25.064353 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:41:25.065426 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:41:25.065532 kernel: ata3.00: applying bridge limits Feb 13 15:41:25.066611 kernel: ata3.00: configured for UDMA/100 Feb 13 15:41:25.067628 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:41:25.116873 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:41:25.129950 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:41:25.129964 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:41:25.907353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:41:25.907887 disk-uuid[560]: The operation has completed successfully. Feb 13 15:41:25.947853 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:41:25.947973 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:41:25.984630 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:41:25.987627 sh[597]: Success Feb 13 15:41:26.002341 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:41:26.039326 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:41:26.061855 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:41:26.064174 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:41:26.076362 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85 Feb 13 15:41:26.076395 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:41:26.078219 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:41:26.078234 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:41:26.078979 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:41:26.083837 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:41:26.086052 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:41:26.099672 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:41:26.102616 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:41:26.112852 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:26.112878 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:41:26.112889 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:41:26.116333 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:41:26.125108 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:41:26.126959 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:26.209670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:41:26.232465 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:41:26.259445 systemd-networkd[776]: lo: Link UP Feb 13 15:41:26.259455 systemd-networkd[776]: lo: Gained carrier Feb 13 15:41:26.261098 systemd-networkd[776]: Enumeration completed Feb 13 15:41:26.261456 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:41:26.261461 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:41:26.274734 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:41:26.274748 systemd-networkd[776]: eth0: Link UP Feb 13 15:41:26.274752 systemd-networkd[776]: eth0: Gained carrier Feb 13 15:41:26.274758 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:41:26.299675 systemd[1]: Reached target network.target - Network. Feb 13 15:41:26.312730 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:41:26.313412 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:41:26.326499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 15:41:26.385817 ignition[782]: Ignition 2.20.0 Feb 13 15:41:26.385829 ignition[782]: Stage: fetch-offline Feb 13 15:41:26.385872 ignition[782]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:26.385882 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:26.385977 ignition[782]: parsed url from cmdline: "" Feb 13 15:41:26.385981 ignition[782]: no config URL provided Feb 13 15:41:26.385986 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:41:26.385995 ignition[782]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:41:26.386023 ignition[782]: op(1): [started] loading QEMU firmware config module Feb 13 15:41:26.386028 ignition[782]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:41:26.397763 ignition[782]: op(1): [finished] loading QEMU firmware config module Feb 13 15:41:26.435995 ignition[782]: parsing config with SHA512: 5170547eaaf17d60b7c48eca53e7f88ce95c2ae1713b8b59d3601490d721edfd1e4003687530db45267de7093dc5beb94e20f7534352f397bd966db007ba8226 Feb 13 15:41:26.439647 unknown[782]: fetched base config from "system" Feb 13 15:41:26.439684 unknown[782]: fetched user config from "qemu" Feb 13 15:41:26.440795 ignition[782]: fetch-offline: fetch-offline passed Feb 13 15:41:26.440891 ignition[782]: Ignition finished successfully Feb 13 15:41:26.443482 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:41:26.445417 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:41:26.453634 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:41:26.469268 ignition[792]: Ignition 2.20.0 Feb 13 15:41:26.469282 ignition[792]: Stage: kargs Feb 13 15:41:26.469460 ignition[792]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:26.469480 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:26.470353 ignition[792]: kargs: kargs passed Feb 13 15:41:26.470398 ignition[792]: Ignition finished successfully Feb 13 15:41:26.473610 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:41:26.486490 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:41:26.497986 ignition[801]: Ignition 2.20.0 Feb 13 15:41:26.498000 ignition[801]: Stage: disks Feb 13 15:41:26.498193 ignition[801]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:26.501107 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:41:26.498207 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:26.502683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:41:26.499078 ignition[801]: disks: disks passed Feb 13 15:41:26.504698 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:41:26.499121 ignition[801]: Ignition finished successfully Feb 13 15:41:26.505952 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:41:26.507932 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:41:26.508987 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:41:26.518577 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:41:26.544184 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:41:26.653606 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:41:27.088493 systemd[1]: Mounting sysroot.mount - /sysroot... 
Feb 13 15:41:27.184334 kernel: EXT4-fs (vda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none. Feb 13 15:41:27.184575 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:41:27.185523 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:41:27.198379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:41:27.199868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:41:27.200807 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:41:27.200846 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:41:27.200867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:41:27.207583 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:41:27.209156 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:41:27.218207 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) Feb 13 15:41:27.218234 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:27.218245 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:41:27.219938 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:41:27.222334 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:41:27.223558 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:41:27.244529 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:41:27.249066 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:41:27.253547 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:41:27.256872 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:41:27.336596 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:41:27.349397 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:41:27.352529 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:41:27.356339 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:27.380264 ignition[932]: INFO : Ignition 2.20.0 Feb 13 15:41:27.380264 ignition[932]: INFO : Stage: mount Feb 13 15:41:27.382192 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:27.382192 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:27.382192 ignition[932]: INFO : mount: mount passed Feb 13 15:41:27.382192 ignition[932]: INFO : Ignition finished successfully Feb 13 15:41:27.383194 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:41:27.390397 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:41:27.391552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:41:27.566567 systemd-networkd[776]: eth0: Gained IPv6LL Feb 13 15:41:28.076376 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:41:28.091500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 15:41:28.109373 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Feb 13 15:41:28.111430 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:41:28.111457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:41:28.111472 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:41:28.114332 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:41:28.115946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:41:28.134056 ignition[961]: INFO : Ignition 2.20.0
Feb 13 15:41:28.134056 ignition[961]: INFO : Stage: files
Feb 13 15:41:28.135739 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:41:28.135739 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:41:28.135739 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:41:28.139041 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:41:28.139041 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:41:28.141658 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:41:28.143116 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:41:28.144773 unknown[961]: wrote ssh authorized keys file for user: core
Feb 13 15:41:28.145831 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:41:28.148034 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 15:41:28.149904 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Feb 13 15:41:28.205863 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:41:28.626648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 15:41:28.628756 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:41:28.630489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:41:28.630489 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:41:28.633944 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:41:28.633944 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:41:28.637462 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:41:28.637462 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:41:28.640989 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:41:28.642856 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:41:28.644748 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:41:28.646797 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:41:28.649547 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:41:28.651986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:41:28.654082 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 15:41:29.116671 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:41:29.443016 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:41:29.443016 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:41:29.456562 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:41:29.456562 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:41:29.469104 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:41:29.469104 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:41:29.469104 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:41:29.469104 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:41:29.469104 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 15:41:29.469104 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:41:29.538766 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:41:29.562004 ignition[961]: INFO : files: files passed
Feb 13 15:41:29.562004 ignition[961]: INFO : Ignition finished successfully
Feb 13 15:41:29.566233 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:41:29.590709 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:41:29.598921 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:41:29.608963 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:41:29.609116 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:41:29.624347 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:41:29.641737 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf
Feb 13 15:41:29.641737 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:41:29.653504 initrd-setup-root-after-ignition[993]: : No such file or directory
Feb 13 15:41:29.653504 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:41:29.654151 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:41:29.659281 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:41:29.681493 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:41:29.726840 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:41:29.728042 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:41:29.732176 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:41:29.739274 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:41:29.741630 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:41:29.744253 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:41:29.775406 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:41:29.795638 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:41:29.809388 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:41:29.812110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:41:29.813621 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:41:29.814802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:41:29.814969 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:41:29.824684 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:41:29.825957 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:41:29.830743 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:41:29.832079 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:41:29.845639 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:41:29.848811 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:41:29.868893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:41:29.878937 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:41:29.882515 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:41:29.883784 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:41:29.884829 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:41:29.885017 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:41:29.886544 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:41:29.903180 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:41:29.906527 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:41:29.909223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:41:29.910817 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:41:29.911007 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:41:29.916701 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:41:29.916845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:41:29.923151 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:41:29.924270 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:41:29.928559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:41:29.930968 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:41:29.931301 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:41:29.931692 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:41:29.931803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:41:29.932241 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:41:29.932358 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:41:29.933781 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:41:29.933916 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:41:29.934304 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:41:29.934512 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:41:29.982664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:41:29.985152 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:41:29.988526 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:41:29.999628 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:41:30.000670 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:41:30.000863 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:41:30.003344 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:41:30.014519 ignition[1017]: INFO : Ignition 2.20.0
Feb 13 15:41:30.014519 ignition[1017]: INFO : Stage: umount
Feb 13 15:41:30.014519 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:41:30.014519 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:41:30.003532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:41:30.026742 ignition[1017]: INFO : umount: umount passed
Feb 13 15:41:30.026742 ignition[1017]: INFO : Ignition finished successfully
Feb 13 15:41:30.021337 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:41:30.022714 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:41:30.032718 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:41:30.033598 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:41:30.043972 systemd[1]: Stopped target network.target - Network.
Feb 13 15:41:30.057168 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:41:30.057265 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:41:30.069808 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:41:30.069919 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:41:30.073507 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:41:30.073570 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:41:30.089462 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:41:30.096036 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:41:30.099254 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:41:30.099613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:41:30.103781 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:41:30.120927 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:41:30.121089 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:41:30.125211 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:41:30.125487 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:41:30.125613 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:41:30.144931 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:41:30.147446 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:41:30.147554 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:41:30.168776 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:41:30.184129 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:41:30.184210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:41:30.184784 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:41:30.184839 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:41:30.191022 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:41:30.191102 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:41:30.192991 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:41:30.193042 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:41:30.195759 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:41:30.198125 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:41:30.198194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:41:30.207352 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:41:30.208506 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:41:30.210855 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:41:30.211996 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:41:30.215935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:41:30.216024 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:41:30.217185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:41:30.217224 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:41:30.220335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:41:30.220402 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:41:30.224039 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:41:30.224099 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:41:30.225118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:41:30.225174 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:41:30.245645 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:41:30.246205 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:41:30.246294 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:41:30.250573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:41:30.250629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:30.255270 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:41:30.255349 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:41:30.266173 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:41:30.266299 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:41:30.272979 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:41:30.273117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:41:30.275788 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:41:30.277175 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:41:30.277240 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:41:30.290464 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:41:30.298259 systemd[1]: Switching root.
Feb 13 15:41:30.328777 systemd-journald[193]: Journal stopped
Feb 13 15:41:32.324136 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:41:32.324242 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:41:32.324268 kernel: SELinux: policy capability open_perms=1
Feb 13 15:41:32.324289 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:41:32.324328 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:41:32.324356 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:41:32.324374 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:41:32.324389 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:41:32.324404 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:41:32.324428 kernel: audit: type=1403 audit(1739461291.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:41:32.324449 systemd[1]: Successfully loaded SELinux policy in 45.277ms.
Feb 13 15:41:32.324476 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.488ms.
Feb 13 15:41:32.324495 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:41:32.324511 systemd[1]: Detected virtualization kvm.
Feb 13 15:41:32.324528 systemd[1]: Detected architecture x86-64.
Feb 13 15:41:32.324544 systemd[1]: Detected first boot.
Feb 13 15:41:32.324567 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:41:32.324583 zram_generator::config[1062]: No configuration found.
Feb 13 15:41:32.324604 kernel: Guest personality initialized and is inactive
Feb 13 15:41:32.324619 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 15:41:32.324634 kernel: Initialized host personality
Feb 13 15:41:32.324650 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:41:32.324665 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:41:32.324683 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:41:32.324700 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:41:32.324716 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:41:32.324735 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:41:32.324755 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:41:32.324772 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:41:32.324788 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:41:32.324805 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:41:32.324821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:41:32.324838 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:41:32.324854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:41:32.324870 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:41:32.324890 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:41:32.324907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:41:32.324923 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:41:32.324940 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:41:32.324957 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:41:32.324973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:41:32.324996 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:41:32.325013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:41:32.325033 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:41:32.325048 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:41:32.325064 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:41:32.325079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:41:32.325097 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:41:32.325112 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:41:32.325127 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:41:32.325142 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:41:32.325157 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:41:32.325175 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:41:32.325190 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:41:32.325205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:41:32.325219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:41:32.325234 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:41:32.325248 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:41:32.325263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:41:32.325281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:41:32.325295 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:41:32.325345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:32.325361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:41:32.325376 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:41:32.325391 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:41:32.325407 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:41:32.325423 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:41:32.325439 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:41:32.325456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:41:32.325475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:41:32.325490 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:41:32.325505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:41:32.325519 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:41:32.325534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:41:32.325549 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:41:32.325564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:41:32.325580 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:41:32.325595 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:41:32.325615 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:41:32.325630 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:41:32.325645 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:41:32.325663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:41:32.325678 kernel: fuse: init (API version 7.39)
Feb 13 15:41:32.325693 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:41:32.325709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:41:32.325723 kernel: loop: module loaded
Feb 13 15:41:32.325738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:41:32.325757 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:41:32.325811 systemd-journald[1140]: Collecting audit messages is disabled.
Feb 13 15:41:32.325843 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:41:32.325864 systemd-journald[1140]: Journal started
Feb 13 15:41:32.325894 systemd-journald[1140]: Runtime Journal (/run/log/journal/6e2ca618b9aa4a3a9f119580fec08ead) is 6M, max 48.2M, 42.2M free.
Feb 13 15:41:31.946014 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:41:31.964607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:41:31.966273 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:41:32.331340 kernel: ACPI: bus type drm_connector registered
Feb 13 15:41:32.334603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:41:32.340084 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:41:32.340151 systemd[1]: Stopped verity-setup.service.
Feb 13 15:41:32.345354 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:32.349593 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:41:32.351308 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:41:32.353763 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:41:32.356436 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:41:32.357843 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:41:32.359225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:41:32.361598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:41:32.363294 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:41:32.365201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:41:32.367170 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:41:32.367550 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:41:32.370214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:41:32.370529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:41:32.372001 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:41:32.372239 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:41:32.373733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:41:32.374162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:41:32.375773 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:41:32.375986 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:41:32.377504 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:41:32.377722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:41:32.379162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:41:32.380652 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:41:32.382344 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:41:32.384497 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:41:32.405535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:41:32.426583 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:41:32.433228 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:41:32.438666 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:41:32.438725 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:41:32.447640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:41:32.450458 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:41:32.458558 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:41:32.460149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:41:32.465663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:41:32.469534 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:41:32.470348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:41:32.474273 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:41:32.475610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:41:32.482832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:41:32.490973 systemd-journald[1140]: Time spent on flushing to /var/log/journal/6e2ca618b9aa4a3a9f119580fec08ead is 28.181ms for 1052 entries.
Feb 13 15:41:32.490973 systemd-journald[1140]: System Journal (/var/log/journal/6e2ca618b9aa4a3a9f119580fec08ead) is 8M, max 195.6M, 187.6M free.
Feb 13 15:41:32.550961 systemd-journald[1140]: Received client request to flush runtime journal.
Feb 13 15:41:32.551010 kernel: loop0: detected capacity change from 0 to 147912
Feb 13 15:41:32.497553 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:41:32.502270 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:41:32.507195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:41:32.510485 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:41:32.512609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:41:32.514379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:41:32.527970 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:41:32.538672 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:41:32.546763 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:41:32.551503 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:41:32.554618 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:41:32.557014 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:41:32.569417 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:41:32.569864 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:41:32.582861 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:41:32.583902 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:41:32.594549 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:41:32.599425 kernel: loop1: detected capacity change from 0 to 218376
Feb 13 15:41:32.603658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:41:32.625940 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Feb 13 15:41:32.625965 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Feb 13 15:41:32.632516 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:41:32.639351 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 15:41:32.683371 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 15:41:32.700341 kernel: loop4: detected capacity change from 0 to 218376
Feb 13 15:41:32.711346 kernel: loop5: detected capacity change from 0 to 138176
Feb 13 15:41:32.723978 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:41:32.724772 (sd-merge)[1206]: Merged extensions into '/usr'.
Feb 13 15:41:32.729933 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:41:32.729951 systemd[1]: Reloading...
Feb 13 15:41:32.796352 zram_generator::config[1237]: No configuration found.
Feb 13 15:41:32.890605 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:41:32.940839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:41:33.008671 systemd[1]: Reloading finished in 278 ms.
Feb 13 15:41:33.028487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:41:33.030259 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:41:33.046916 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:41:33.049009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:41:33.059727 systemd[1]: Reload requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:41:33.059751 systemd[1]: Reloading...
Feb 13 15:41:33.085179 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:41:33.085529 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:41:33.086517 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:41:33.086819 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Feb 13 15:41:33.086902 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Feb 13 15:41:33.091737 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:41:33.091756 systemd-tmpfiles[1272]: Skipping /boot
Feb 13 15:41:33.107939 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:41:33.108131 systemd-tmpfiles[1272]: Skipping /boot
Feb 13 15:41:33.141376 zram_generator::config[1307]: No configuration found.
Feb 13 15:41:33.248999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:41:33.316307 systemd[1]: Reloading finished in 256 ms.
Feb 13 15:41:33.334263 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:41:33.355119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:41:33.365946 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:41:33.368646 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:41:33.371254 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:41:33.376172 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:41:33.381285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:41:33.385668 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:41:33.390668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:33.390847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:41:33.392378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:41:33.398180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:41:33.404653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:41:33.406215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:41:33.406433 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:41:33.408999 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:41:33.411531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:33.412975 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:41:33.415924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:41:33.416158 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:41:33.418758 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:41:33.419176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:41:33.427310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:41:33.428374 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Feb 13 15:41:33.430123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:41:33.439622 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:41:33.444730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:33.444958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:41:33.452877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:41:33.455702 augenrules[1375]: No rules
Feb 13 15:41:33.457610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:41:33.463642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:41:33.465456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:41:33.465655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:41:33.468278 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:41:33.469737 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:33.472188 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:41:33.475085 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:41:33.475601 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:41:33.477556 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:41:33.479675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:41:33.482932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:41:33.483156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:41:33.485030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:41:33.485253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:41:33.487244 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:41:33.488527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:41:33.499715 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:41:33.523777 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:41:33.533248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:33.565858 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:41:33.567074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:41:33.571485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:41:33.577511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:41:33.581721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:41:33.587342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:41:33.590511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:41:33.590566 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:41:33.593081 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:41:33.593393 systemd-resolved[1343]: Positive Trust Anchors:
Feb 13 15:41:33.593402 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:41:33.593433 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:41:33.596512 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:41:33.598398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:41:33.598441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:41:33.599388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:41:33.599670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:41:33.599804 systemd-resolved[1343]: Defaulting to hostname 'linux'.
Feb 13 15:41:33.602705 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:41:33.604603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:41:33.604867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:41:33.606817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:41:33.607091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:41:33.609550 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:41:33.609854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:41:33.612897 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:41:33.615347 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1389)
Feb 13 15:41:33.621972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:41:33.624640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:41:33.624705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:41:33.632256 augenrules[1414]: /sbin/augenrules: No change
Feb 13 15:41:33.651077 augenrules[1448]: No rules
Feb 13 15:41:33.659679 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:41:33.659387 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:41:33.659763 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:41:33.668921 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:41:33.684658 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:41:33.695534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:41:33.706338 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:41:33.715646 systemd-networkd[1431]: lo: Link UP
Feb 13 15:41:33.715664 systemd-networkd[1431]: lo: Gained carrier
Feb 13 15:41:33.717830 systemd-networkd[1431]: Enumeration completed
Feb 13 15:41:33.718622 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:41:33.719082 systemd[1]: Reached target network.target - Network.
Feb 13 15:41:33.724379 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:41:33.724400 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:41:33.725344 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 15:41:33.729519 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:41:33.729735 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:41:33.729968 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:41:33.728616 systemd-networkd[1431]: eth0: Link UP
Feb 13 15:41:33.728623 systemd-networkd[1431]: eth0: Gained carrier
Feb 13 15:41:33.728652 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:41:33.732577 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 15:41:33.737347 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:41:33.739779 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:41:33.741707 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:41:33.744944 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:41:33.747185 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:41:33.747887 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection.
Feb 13 15:41:33.749378 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:41:33.750406 systemd-timesyncd[1432]: Initial clock synchronization to Thu 2025-02-13 15:41:33.954005 UTC.
Feb 13 15:41:33.773502 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 15:41:33.837367 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:41:33.839738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:41:33.856515 kernel: kvm_amd: TSC scaling supported
Feb 13 15:41:33.856620 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 15:41:33.856640 kernel: kvm_amd: Nested Paging enabled
Feb 13 15:41:33.856699 kernel: kvm_amd: LBR virtualization supported
Feb 13 15:41:33.858372 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 15:41:33.858492 kernel: kvm_amd: Virtual GIF supported
Feb 13 15:41:33.884357 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:41:33.903088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:33.923672 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:41:33.932621 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:41:33.942541 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:41:34.087274 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:41:34.089113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:41:34.090550 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:41:34.091964 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:41:34.093438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:41:34.095180 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:41:34.096617 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:41:34.097994 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:41:34.099488 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:41:34.099523 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:41:34.100660 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:41:34.102799 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:41:34.105896 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:41:34.110225 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 15:41:34.111842 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 15:41:34.113363 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 15:41:34.117718 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:41:34.119307 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 15:41:34.122122 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:41:34.124023 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:41:34.125267 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:41:34.126301 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:41:34.126771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:41:34.126811 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:41:34.127951 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:41:34.130102 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:41:34.134465 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:41:34.134784 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:41:34.139579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:41:34.140852 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:41:34.148261 jq[1482]: false
Feb 13 15:41:34.152632 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:41:34.150998 dbus-daemon[1481]: [system] SELinux support is enabled
Feb 13 15:41:34.156603 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:41:34.160273 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:41:34.166093 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:41:34.170579 extend-filesystems[1483]: Found loop3
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found loop4
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found loop5
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found sr0
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda1
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda2
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda3
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found usr
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda4
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda6
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda7
Feb 13 15:41:34.171764 extend-filesystems[1483]: Found vda9
Feb 13 15:41:34.171764 extend-filesystems[1483]: Checking size of /dev/vda9
Feb 13 15:41:34.194091 extend-filesystems[1483]: Resized partition /dev/vda9
Feb 13 15:41:34.173315 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:41:34.194909 extend-filesystems[1502]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:41:34.199861 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:41:34.179716 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:41:34.180499 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:41:34.196587 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:41:34.203482 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:41:34.206017 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:41:34.210774 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:41:34.216083 jq[1504]: true
Feb 13 15:41:34.218370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1399)
Feb 13 15:41:34.218463 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:41:34.218788 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:41:34.219181 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:41:34.219466 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:41:34.222943 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:41:34.223232 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:41:34.238403 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:41:34.238450 update_engine[1503]: I20250213 15:41:34.236096 1503 main.cc:92] Flatcar Update Engine starting
Feb 13 15:41:34.239966 update_engine[1503]: I20250213 15:41:34.239822 1503 update_check_scheduler.cc:74] Next update check in 10m16s
Feb 13 15:41:34.248806 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:41:34.266209 jq[1508]: true
Feb 13 15:41:34.267575 extend-filesystems[1502]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:41:34.267575 extend-filesystems[1502]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:41:34.267575 extend-filesystems[1502]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:41:34.272261 extend-filesystems[1483]: Resized filesystem in /dev/vda9
Feb 13 15:41:34.274827 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:41:34.276615 systemd-logind[1495]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:41:34.276649 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:41:34.276658 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:41:34.278255 systemd-logind[1495]: New seat seat0.
Feb 13 15:41:34.279597 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:41:34.284550 tar[1507]: linux-amd64/LICENSE
Feb 13 15:41:34.285324 tar[1507]: linux-amd64/helm
Feb 13 15:41:34.286116 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:41:34.290449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:41:34.291180 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:41:34.292796 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:41:34.292969 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:41:34.329707 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:41:34.431537 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:41:34.434880 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:41:34.435943 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:41:34.440331 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:41:34.576333 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:41:34.626312 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:41:34.649937 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:41:34.660036 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:41:34.660544 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:41:34.669678 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:41:34.716268 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:41:34.724727 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:41:34.726855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:41:34.728632 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:41:34.742242 containerd[1509]: time="2025-02-13T15:41:34.742121432Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:41:34.769704 containerd[1509]: time="2025-02-13T15:41:34.769629696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.771704 containerd[1509]: time="2025-02-13T15:41:34.771632289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:41:34.771704 containerd[1509]: time="2025-02-13T15:41:34.771692799Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:41:34.771793 containerd[1509]: time="2025-02-13T15:41:34.771722711Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:41:34.772013 containerd[1509]: time="2025-02-13T15:41:34.771983105Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:41:34.772060 containerd[1509]: time="2025-02-13T15:41:34.772013119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772130 containerd[1509]: time="2025-02-13T15:41:34.772103458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772130 containerd[1509]: time="2025-02-13T15:41:34.772123546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772484 containerd[1509]: time="2025-02-13T15:41:34.772451622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772518 containerd[1509]: time="2025-02-13T15:41:34.772480804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772518 containerd[1509]: time="2025-02-13T15:41:34.772508896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772567 containerd[1509]: time="2025-02-13T15:41:34.772525428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.772674 containerd[1509]: time="2025-02-13T15:41:34.772652523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.773009 containerd[1509]: time="2025-02-13T15:41:34.772977342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:41:34.773236 containerd[1509]: time="2025-02-13T15:41:34.773205030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:41:34.773236 containerd[1509]: time="2025-02-13T15:41:34.773232341Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:41:34.773414 containerd[1509]: time="2025-02-13T15:41:34.773386284Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:41:34.773502 containerd[1509]: time="2025-02-13T15:41:34.773474784Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:41:34.779977 containerd[1509]: time="2025-02-13T15:41:34.779855499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:41:34.779977 containerd[1509]: time="2025-02-13T15:41:34.779943373Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:41:34.779977 containerd[1509]: time="2025-02-13T15:41:34.779965187Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:41:34.780086 containerd[1509]: time="2025-02-13T15:41:34.779987196Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:41:34.780086 containerd[1509]: time="2025-02-13T15:41:34.780006914Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:41:34.780261 containerd[1509]: time="2025-02-13T15:41:34.780226125Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:41:34.780585 containerd[1509]: time="2025-02-13T15:41:34.780553246Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:41:34.780753 containerd[1509]: time="2025-02-13T15:41:34.780720670Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:41:34.780753 containerd[1509]: time="2025-02-13T15:41:34.780746645Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:41:34.780811 containerd[1509]: time="2025-02-13T15:41:34.780766014Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:41:34.780811 containerd[1509]: time="2025-02-13T15:41:34.780783307Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780849 containerd[1509]: time="2025-02-13T15:41:34.780809838Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780849 containerd[1509]: time="2025-02-13T15:41:34.780826966Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780849 containerd[1509]: time="2025-02-13T15:41:34.780844568Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780902 containerd[1509]: time="2025-02-13T15:41:34.780862580Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780902 containerd[1509]: time="2025-02-13T15:41:34.780879648Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780902 containerd[1509]: time="2025-02-13T15:41:34.780896488Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780963 containerd[1509]: time="2025-02-13T15:41:34.780911819Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:41:34.780963 containerd[1509]: time="2025-02-13T15:41:34.780937445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.780963 containerd[1509]: time="2025-02-13T15:41:34.780954276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781029 containerd[1509]: time="2025-02-13T15:41:34.780970090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781029 containerd[1509]: time="2025-02-13T15:41:34.780986870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781029 containerd[1509]: time="2025-02-13T15:41:34.781002549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781029 containerd[1509]: time="2025-02-13T15:41:34.781019154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781107 containerd[1509]: time="2025-02-13T15:41:34.781035656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781107 containerd[1509]: time="2025-02-13T15:41:34.781052877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781107 containerd[1509]: time="2025-02-13T15:41:34.781068752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781107 containerd[1509]: time="2025-02-13T15:41:34.781086877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781107 containerd[1509]: time="2025-02-13T15:41:34.781101232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781200 containerd[1509]: time="2025-02-13T15:41:34.781123221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781200 containerd[1509]: time="2025-02-13T15:41:34.781139682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781200 containerd[1509]: time="2025-02-13T15:41:34.781158824Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:41:34.781200 containerd[1509]: time="2025-02-13T15:41:34.781184461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781282 containerd[1509]: time="2025-02-13T15:41:34.781201117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781282 containerd[1509]: time="2025-02-13T15:41:34.781216807Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:41:34.781282 containerd[1509]: time="2025-02-13T15:41:34.781272149Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:41:34.781361 containerd[1509]: time="2025-02-13T15:41:34.781295382Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:41:34.781361 containerd[1509]: time="2025-02-13T15:41:34.781310076Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:41:34.781361 containerd[1509]: time="2025-02-13T15:41:34.781325560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:41:34.781489 containerd[1509]: time="2025-02-13T15:41:34.781456496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781489 containerd[1509]: time="2025-02-13T15:41:34.781481948Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:41:34.781529 containerd[1509]: time="2025-02-13T15:41:34.781496180Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:41:34.781529 containerd[1509]: time="2025-02-13T15:41:34.781511746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:41:34.781976 containerd[1509]: time="2025-02-13T15:41:34.781906940Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:41:34.781976 containerd[1509]: time="2025-02-13T15:41:34.781974427Z" level=info msg="Connect containerd service"
Feb 13 15:41:34.782125 containerd[1509]: time="2025-02-13T15:41:34.782003629Z" level=info msg="using legacy CRI server"
Feb 13 15:41:34.782125 containerd[1509]: time="2025-02-13T15:41:34.782012744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:41:34.787755 containerd[1509]: time="2025-02-13T15:41:34.787458602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:41:34.788709 containerd[1509]: time="2025-02-13T15:41:34.788661251Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789050136Z" level=info msg="Start subscribing containerd event"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789146609Z" level=info msg="Start recovering state"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789087815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789300799Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789305700Z" level=info msg="Start event monitor"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789390501Z" level=info msg="Start snapshots syncer"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789406552Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789417495Z" level=info msg="Start streaming server"
Feb 13 15:41:34.790249 containerd[1509]: time="2025-02-13T15:41:34.789510094Z" level=info msg="containerd successfully booted in 0.050870s"
Feb 13 15:41:34.789831 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:41:34.956490 tar[1507]: linux-amd64/README.md
Feb 13 15:41:34.972399 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:41:35.758930 systemd-networkd[1431]: eth0: Gained IPv6LL
Feb 13 15:41:35.762639 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:41:35.764709 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:41:35.776571 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:41:35.779149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:41:35.781582 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:41:35.800697 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:41:35.801001 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:41:35.803428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:41:35.806301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:41:36.739134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:41:36.740919 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:41:36.742526 systemd[1]: Startup finished in 828ms (kernel) + 7.303s (initrd) + 5.775s (userspace) = 13.907s.
Feb 13 15:41:36.745051 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:41:37.347430 kubelet[1593]: E0213 15:41:37.347359 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:41:37.351652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:41:37.351871 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:41:37.352321 systemd[1]: kubelet.service: Consumed 1.401s CPU time, 253.2M memory peak.
Feb 13 15:41:40.016811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:41:40.027566 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:57100.service - OpenSSH per-connection server daemon (10.0.0.1:57100).
Feb 13 15:41:40.072837 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 57100 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:40.074969 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:40.082595 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:41:40.094596 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:41:40.101107 systemd-logind[1495]: New session 1 of user core.
Feb 13 15:41:40.107757 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:41:40.122796 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:41:40.126310 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:41:40.128962 systemd-logind[1495]: New session c1 of user core.
Feb 13 15:41:40.293873 systemd[1610]: Queued start job for default target default.target.
Feb 13 15:41:40.306995 systemd[1610]: Created slice app.slice - User Application Slice.
Feb 13 15:41:40.307037 systemd[1610]: Reached target paths.target - Paths.
Feb 13 15:41:40.307102 systemd[1610]: Reached target timers.target - Timers.
Feb 13 15:41:40.309061 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:41:40.320764 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:41:40.320917 systemd[1610]: Reached target sockets.target - Sockets.
Feb 13 15:41:40.320965 systemd[1610]: Reached target basic.target - Basic System.
Feb 13 15:41:40.321015 systemd[1610]: Reached target default.target - Main User Target.
Feb 13 15:41:40.321055 systemd[1610]: Startup finished in 184ms.
Feb 13 15:41:40.321743 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:41:40.335519 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:41:40.413634 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:57106.service - OpenSSH per-connection server daemon (10.0.0.1:57106).
Feb 13 15:41:40.447533 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 57106 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:40.449063 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:40.453489 systemd-logind[1495]: New session 2 of user core.
Feb 13 15:41:40.466554 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:41:40.520372 sshd[1623]: Connection closed by 10.0.0.1 port 57106
Feb 13 15:41:40.520753 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:40.533179 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:57106.service: Deactivated successfully.
Feb 13 15:41:40.535163 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:41:40.536899 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:41:40.538271 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:57120.service - OpenSSH per-connection server daemon (10.0.0.1:57120).
Feb 13 15:41:40.539231 systemd-logind[1495]: Removed session 2.
Feb 13 15:41:40.578537 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 57120 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:40.580080 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:40.584608 systemd-logind[1495]: New session 3 of user core.
Feb 13 15:41:40.598463 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:41:40.647647 sshd[1631]: Connection closed by 10.0.0.1 port 57120
Feb 13 15:41:40.647985 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:40.656283 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:57120.service: Deactivated successfully.
Feb 13 15:41:40.658257 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:41:40.659888 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:41:40.670660 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122).
Feb 13 15:41:40.671627 systemd-logind[1495]: Removed session 3.
Feb 13 15:41:40.704231 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:40.705679 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:40.709734 systemd-logind[1495]: New session 4 of user core.
Feb 13 15:41:40.729487 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:41:40.782611 sshd[1639]: Connection closed by 10.0.0.1 port 57122
Feb 13 15:41:40.783033 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:40.795639 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:57122.service: Deactivated successfully.
Feb 13 15:41:40.797645 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:41:40.799386 systemd-logind[1495]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:41:40.816704 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:57136.service - OpenSSH per-connection server daemon (10.0.0.1:57136).
Feb 13 15:41:40.817884 systemd-logind[1495]: Removed session 4.
Feb 13 15:41:40.852001 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 57136 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:40.853778 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:40.857945 systemd-logind[1495]: New session 5 of user core.
Feb 13 15:41:40.871542 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:41:40.932027 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:41:40.932381 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:41:40.953290 sudo[1649]: pam_unix(sudo:session): session closed for user root
Feb 13 15:41:40.954924 sshd[1648]: Connection closed by 10.0.0.1 port 57136
Feb 13 15:41:40.955337 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:40.967049 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:57136.service: Deactivated successfully.
Feb 13 15:41:40.968780 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:41:40.970529 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:41:40.980565 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:57150.service - OpenSSH per-connection server daemon (10.0.0.1:57150).
Feb 13 15:41:40.981509 systemd-logind[1495]: Removed session 5.
Feb 13 15:41:41.016707 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:41.018593 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:41.023635 systemd-logind[1495]: New session 6 of user core.
Feb 13 15:41:41.034461 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:41:41.090197 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:41:41.090548 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:41:41.094083 sudo[1659]: pam_unix(sudo:session): session closed for user root
Feb 13 15:41:41.100769 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:41:41.101086 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:41:41.122592 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:41:41.152823 augenrules[1681]: No rules
Feb 13 15:41:41.154691 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:41:41.154981 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:41:41.156100 sudo[1658]: pam_unix(sudo:session): session closed for user root
Feb 13 15:41:41.157633 sshd[1657]: Connection closed by 10.0.0.1 port 57150
Feb 13 15:41:41.157933 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:41.176045 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:57150.service: Deactivated successfully.
Feb 13 15:41:41.178829 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:41:41.180993 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:41:41.192866 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:57166.service - OpenSSH per-connection server daemon (10.0.0.1:57166).
Feb 13 15:41:41.194167 systemd-logind[1495]: Removed session 6.
Feb 13 15:41:41.229063 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 57166 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:41:41.231090 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:41.236059 systemd-logind[1495]: New session 7 of user core.
Feb 13 15:41:41.248534 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:41:41.304337 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:41:41.304686 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:41:41.608542 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:41:41.608746 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:41:42.070506 dockerd[1714]: time="2025-02-13T15:41:42.070361073Z" level=info msg="Starting up"
Feb 13 15:41:42.540548 dockerd[1714]: time="2025-02-13T15:41:42.540366998Z" level=info msg="Loading containers: start."
Feb 13 15:41:42.734358 kernel: Initializing XFRM netlink socket
Feb 13 15:41:42.827444 systemd-networkd[1431]: docker0: Link UP
Feb 13 15:41:42.863150 dockerd[1714]: time="2025-02-13T15:41:42.863097807Z" level=info msg="Loading containers: done."
Feb 13 15:41:42.891354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3911226094-merged.mount: Deactivated successfully.
Feb 13 15:41:42.893822 dockerd[1714]: time="2025-02-13T15:41:42.893756723Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:41:42.893961 dockerd[1714]: time="2025-02-13T15:41:42.893913501Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:41:42.894095 dockerd[1714]: time="2025-02-13T15:41:42.894064043Z" level=info msg="Daemon has completed initialization"
Feb 13 15:41:42.973868 dockerd[1714]: time="2025-02-13T15:41:42.973783789Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:41:42.974016 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:41:43.649545 containerd[1509]: time="2025-02-13T15:41:43.649488553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\""
Feb 13 15:41:46.051463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929870602.mount: Deactivated successfully.
Feb 13 15:41:47.602377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:41:47.612528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:41:47.785999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:41:47.791068 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:41:48.234823 kubelet[1971]: E0213 15:41:48.234765 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:41:48.241494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:41:48.241698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:41:48.242032 systemd[1]: kubelet.service: Consumed 270ms CPU time, 106.8M memory peak.
Feb 13 15:41:49.851244 containerd[1509]: time="2025-02-13T15:41:49.851172524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:49.903116 containerd[1509]: time="2025-02-13T15:41:49.903057909Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931"
Feb 13 15:41:49.925045 containerd[1509]: time="2025-02-13T15:41:49.924970110Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:49.977760 containerd[1509]: time="2025-02-13T15:41:49.977696299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:49.978925 containerd[1509]: time="2025-02-13T15:41:49.978869580Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 6.329336465s"
Feb 13 15:41:49.978925 containerd[1509]: time="2025-02-13T15:41:49.978924743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\""
Feb 13 15:41:49.979645 containerd[1509]: time="2025-02-13T15:41:49.979611419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\""
Feb 13 15:41:53.096440 containerd[1509]: time="2025-02-13T15:41:53.096362346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:53.101639 containerd[1509]: time="2025-02-13T15:41:53.101541107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784"
Feb 13 15:41:53.104764 containerd[1509]: time="2025-02-13T15:41:53.104713381Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:53.109464 containerd[1509]: time="2025-02-13T15:41:53.109410309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:53.110741 containerd[1509]: time="2025-02-13T15:41:53.110702247Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 3.131046355s"
Feb 13 15:41:53.110787 containerd[1509]: time="2025-02-13T15:41:53.110746518Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\""
Feb 13 15:41:53.111524 containerd[1509]: time="2025-02-13T15:41:53.111292433Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\""
Feb 13 15:41:54.665742 containerd[1509]: time="2025-02-13T15:41:54.665668696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:54.670938 containerd[1509]: time="2025-02-13T15:41:54.670877184Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276"
Feb 13 15:41:54.677056 containerd[1509]: time="2025-02-13T15:41:54.676987213Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:54.684123 containerd[1509]: time="2025-02-13T15:41:54.684083241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:54.685244 containerd[1509]: time="2025-02-13T15:41:54.685197908Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.573851547s"
Feb 13 15:41:54.685244 containerd[1509]: time="2025-02-13T15:41:54.685230637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\""
Feb 13 15:41:54.686079 containerd[1509]: time="2025-02-13T15:41:54.686057717Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 15:41:57.107972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1908773297.mount: Deactivated successfully.
Feb 13 15:41:57.890662 containerd[1509]: time="2025-02-13T15:41:57.890584604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:57.896611 containerd[1509]: time="2025-02-13T15:41:57.896543266Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839"
Feb 13 15:41:57.910497 containerd[1509]: time="2025-02-13T15:41:57.910449951Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:57.933662 containerd[1509]: time="2025-02-13T15:41:57.933611397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:41:57.934263 containerd[1509]: time="2025-02-13T15:41:57.934216878Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 3.248128745s"
Feb 13 15:41:57.934263 containerd[1509]: time="2025-02-13T15:41:57.934258154Z" level=info msg="PullImage
\"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 15:41:57.935039 containerd[1509]: time="2025-02-13T15:41:57.934987254Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 15:41:58.492501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:41:58.510553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:58.663697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:58.668587 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:41:58.867633 kubelet[2008]: E0213 15:41:58.867479 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:41:58.871964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:41:58.872168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:41:58.872568 systemd[1]: kubelet.service: Consumed 203ms CPU time, 104.8M memory peak. Feb 13 15:42:01.062556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226158757.mount: Deactivated successfully. 
Feb 13 15:42:02.356927 containerd[1509]: time="2025-02-13T15:42:02.356849758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:02.358194 containerd[1509]: time="2025-02-13T15:42:02.358156372Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Feb 13 15:42:02.359577 containerd[1509]: time="2025-02-13T15:42:02.359530141Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:02.363170 containerd[1509]: time="2025-02-13T15:42:02.363136383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:02.364525 containerd[1509]: time="2025-02-13T15:42:02.364480970Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.429443257s"
Feb 13 15:42:02.364525 containerd[1509]: time="2025-02-13T15:42:02.364519666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Feb 13 15:42:02.365038 containerd[1509]: time="2025-02-13T15:42:02.365015226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 15:42:02.838178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110477897.mount: Deactivated successfully.
Feb 13 15:42:02.844204 containerd[1509]: time="2025-02-13T15:42:02.844141297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:02.845031 containerd[1509]: time="2025-02-13T15:42:02.844965172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Feb 13 15:42:02.846429 containerd[1509]: time="2025-02-13T15:42:02.846385678Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:02.849181 containerd[1509]: time="2025-02-13T15:42:02.849135834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:02.849874 containerd[1509]: time="2025-02-13T15:42:02.849829506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 484.779684ms"
Feb 13 15:42:02.849874 containerd[1509]: time="2025-02-13T15:42:02.849861746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 15:42:02.850468 containerd[1509]: time="2025-02-13T15:42:02.850422709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Feb 13 15:42:03.591165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293809557.mount: Deactivated successfully.
Feb 13 15:42:05.517876 containerd[1509]: time="2025-02-13T15:42:05.517807356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:05.518725 containerd[1509]: time="2025-02-13T15:42:05.518680869Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320"
Feb 13 15:42:05.520263 containerd[1509]: time="2025-02-13T15:42:05.520196889Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:05.523711 containerd[1509]: time="2025-02-13T15:42:05.523681359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:42:05.525123 containerd[1509]: time="2025-02-13T15:42:05.525099034Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.674640629s"
Feb 13 15:42:05.525175 containerd[1509]: time="2025-02-13T15:42:05.525124302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Feb 13 15:42:07.510983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:42:07.511149 systemd[1]: kubelet.service: Consumed 203ms CPU time, 104.8M memory peak.
Feb 13 15:42:07.527551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:42:07.557000 systemd[1]: Reload requested from client PID 2156 ('systemctl') (unit session-7.scope)...
Feb 13 15:42:07.557018 systemd[1]: Reloading...
Feb 13 15:42:07.664361 zram_generator::config[2203]: No configuration found.
Feb 13 15:42:07.986720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:42:08.091305 systemd[1]: Reloading finished in 533 ms.
Feb 13 15:42:08.141225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:42:08.146176 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:42:08.147396 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:42:08.147789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:42:08.147847 systemd[1]: kubelet.service: Consumed 157ms CPU time, 91.9M memory peak.
Feb 13 15:42:08.149722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:42:08.312332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:42:08.316449 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:42:08.351902 kubelet[2250]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:42:08.351902 kubelet[2250]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:42:08.351902 kubelet[2250]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:42:08.352423 kubelet[2250]: I0213 15:42:08.351994 2250 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:42:08.469730 kubelet[2250]: I0213 15:42:08.469683 2250 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 15:42:08.469730 kubelet[2250]: I0213 15:42:08.469723 2250 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:42:08.470133 kubelet[2250]: I0213 15:42:08.470101 2250 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 15:42:08.492122 kubelet[2250]: E0213 15:42:08.492059 2250 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:42:08.495500 kubelet[2250]: I0213 15:42:08.495466 2250 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:42:08.503780 kubelet[2250]: E0213 15:42:08.503724 2250 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:42:08.503780 kubelet[2250]: I0213 15:42:08.503778 2250 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:42:08.508985 kubelet[2250]: I0213 15:42:08.508954 2250 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:42:08.509924 kubelet[2250]: I0213 15:42:08.509863 2250 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:42:08.510142 kubelet[2250]: I0213 15:42:08.509904 2250 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:42:08.510142 kubelet[2250]: I0213 15:42:08.510134 2250 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:42:08.510288 kubelet[2250]: I0213 15:42:08.510147 2250 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 15:42:08.510358 kubelet[2250]: I0213 15:42:08.510333 2250 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:42:08.513359 kubelet[2250]: I0213 15:42:08.513308 2250 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 15:42:08.513359 kubelet[2250]: I0213 15:42:08.513352 2250 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:42:08.513480 kubelet[2250]: I0213 15:42:08.513378 2250 kubelet.go:352] "Adding apiserver pod source"
Feb 13 15:42:08.513480 kubelet[2250]: I0213 15:42:08.513393 2250 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:42:08.519828 kubelet[2250]: I0213 15:42:08.519784 2250 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:42:08.519828 kubelet[2250]: W0213 15:42:08.519766 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Feb 13 15:42:08.519986 kubelet[2250]: E0213 15:42:08.519853 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:42:08.519986 kubelet[2250]: W0213 15:42:08.519740 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Feb 13 15:42:08.519986 kubelet[2250]: E0213 15:42:08.519902 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:42:08.521941 kubelet[2250]: I0213 15:42:08.520414 2250 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:42:08.521941 kubelet[2250]: W0213 15:42:08.520995 2250 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:42:08.523069 kubelet[2250]: I0213 15:42:08.523028 2250 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 15:42:08.523248 kubelet[2250]: I0213 15:42:08.523177 2250 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:42:08.526334 kubelet[2250]: I0213 15:42:08.523571 2250 server.go:1287] "Started kubelet"
Feb 13 15:42:08.526334 kubelet[2250]: I0213 15:42:08.524334 2250 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:42:08.526406 kubelet[2250]: I0213 15:42:08.526377 2250 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:42:08.526481 kubelet[2250]: I0213 15:42:08.526437 2250 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:42:08.529630 kubelet[2250]: I0213 15:42:08.529115 2250 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:42:08.529630 kubelet[2250]: I0213 15:42:08.529119 2250 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 15:42:08.531689 kubelet[2250]: E0213 15:42:08.529382 2250 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cedcbb257ae1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:42:08.523066081 +0000 UTC m=+0.202928427,LastTimestamp:2025-02-13 15:42:08.523066081 +0000 UTC m=+0.202928427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:42:08.531689 kubelet[2250]: E0213 15:42:08.530984 2250 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:42:08.531689 kubelet[2250]: E0213 15:42:08.531041 2250 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:42:08.531689 kubelet[2250]: I0213 15:42:08.531066 2250 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 15:42:08.531689 kubelet[2250]: I0213 15:42:08.531214 2250 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:42:08.531689 kubelet[2250]: I0213 15:42:08.531273 2250 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:42:08.531689 kubelet[2250]: I0213 15:42:08.531566 2250 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:42:08.531689 kubelet[2250]: W0213 15:42:08.531558 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Feb 13 15:42:08.531932 kubelet[2250]: E0213 15:42:08.531596 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:42:08.531932 kubelet[2250]: I0213 15:42:08.531642 2250 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:42:08.532158 kubelet[2250]: E0213 15:42:08.532096 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms"
Feb 13 15:42:08.532560 kubelet[2250]: I0213 15:42:08.532545 2250 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:42:08.547152 kubelet[2250]: I0213 15:42:08.547113 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:42:08.548093 kubelet[2250]: I0213 15:42:08.548071 2250 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 15:42:08.548221 kubelet[2250]: I0213 15:42:08.548083 2250 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 15:42:08.548221 kubelet[2250]: I0213 15:42:08.548120 2250 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:42:08.548809 kubelet[2250]: I0213 15:42:08.548557 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:42:08.548809 kubelet[2250]: I0213 15:42:08.548596 2250 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 15:42:08.548809 kubelet[2250]: I0213 15:42:08.548615 2250 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 15:42:08.548809 kubelet[2250]: I0213 15:42:08.548622 2250 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 15:42:08.549350 kubelet[2250]: E0213 15:42:08.549128 2250 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:42:08.549350 kubelet[2250]: W0213 15:42:08.549259 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Feb 13 15:42:08.549453 kubelet[2250]: E0213 15:42:08.549389 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:42:08.631790 kubelet[2250]: E0213 15:42:08.631747 2250 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:42:08.650100 kubelet[2250]: E0213 15:42:08.650014 2250 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:42:08.732465 kubelet[2250]: E0213 15:42:08.732415 2250 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:42:08.732915 kubelet[2250]: E0213 15:42:08.732836 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms"
Feb 13 15:42:08.816269 kubelet[2250]: I0213 15:42:08.816211 2250 policy_none.go:49] "None policy: Start"
Feb 13 15:42:08.816269 kubelet[2250]: I0213 15:42:08.816266 2250 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 15:42:08.816269 kubelet[2250]: I0213 15:42:08.816282 2250 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:42:08.833305 kubelet[2250]: E0213 15:42:08.833256 2250 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:42:08.850348 kubelet[2250]: E0213 15:42:08.850288 2250 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:42:08.866917 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:42:08.889838 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:42:08.893765 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:42:08.904430 kubelet[2250]: I0213 15:42:08.904232 2250 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:42:08.904488 kubelet[2250]: I0213 15:42:08.904449 2250 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:42:08.904511 kubelet[2250]: I0213 15:42:08.904465 2250 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:42:08.904698 kubelet[2250]: I0213 15:42:08.904679 2250 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:42:08.905849 kubelet[2250]: E0213 15:42:08.905809 2250 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 15:42:08.905849 kubelet[2250]: E0213 15:42:08.905840 2250 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:42:09.006675 kubelet[2250]: I0213 15:42:09.006626 2250 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:42:09.007133 kubelet[2250]: E0213 15:42:09.007083 2250 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Feb 13 15:42:09.133676 kubelet[2250]: E0213 15:42:09.133626 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms"
Feb 13 15:42:09.209120 kubelet[2250]: I0213 15:42:09.209018 2250 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:42:09.209485 kubelet[2250]: E0213 15:42:09.209440 2250 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost"
Feb 13 15:42:09.259407 systemd[1]: Created slice kubepods-burstable-pod3f9f0bb8bf3457f29436bd88a6fcd8f8.slice - libcontainer container kubepods-burstable-pod3f9f0bb8bf3457f29436bd88a6fcd8f8.slice.
Feb 13 15:42:09.276744 kubelet[2250]: E0213 15:42:09.276697 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:42:09.279634 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice.
Feb 13 15:42:09.297004 kubelet[2250]: E0213 15:42:09.296972 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:42:09.299734 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice.
Feb 13 15:42:09.301298 kubelet[2250]: E0213 15:42:09.301272 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:42:09.336774 kubelet[2250]: I0213 15:42:09.336707 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f9f0bb8bf3457f29436bd88a6fcd8f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f9f0bb8bf3457f29436bd88a6fcd8f8\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:42:09.337399 kubelet[2250]: I0213 15:42:09.336761 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f9f0bb8bf3457f29436bd88a6fcd8f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f9f0bb8bf3457f29436bd88a6fcd8f8\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:42:09.337399 kubelet[2250]: I0213 15:42:09.337123 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f9f0bb8bf3457f29436bd88a6fcd8f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f9f0bb8bf3457f29436bd88a6fcd8f8\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:42:09.337399 kubelet[2250]: I0213 15:42:09.337163 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:42:09.337399 kubelet[2250]: I0213 15:42:09.337194 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:42:09.337399 kubelet[2250]: I0213 15:42:09.337222 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:42:09.337574 kubelet[2250]: I0213 15:42:09.337246 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:42:09.337574 kubelet[2250]: I0213 15:42:09.337269 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:42:09.337574 kubelet[2250]: I0213 15:42:09.337291 2250 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:42:09.468119 kubelet[2250]: W0213 15:42:09.467921 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
Feb 13 15:42:09.468119 kubelet[2250]: E0213 15:42:09.468020 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:42:09.578066 kubelet[2250]: E0213 15:42:09.577999 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:42:09.578863 containerd[1509]: time="2025-02-13T15:42:09.578782862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f9f0bb8bf3457f29436bd88a6fcd8f8,Namespace:kube-system,Attempt:0,}"
Feb 13 15:42:09.598155 kubelet[2250]: E0213 15:42:09.598102 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:42:09.598706 containerd[1509]: time="2025-02-13T15:42:09.598659918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}"
Feb 13 15:42:09.602098 kubelet[2250]: E0213 15:42:09.602024 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:42:09.603348 containerd[1509]: time="2025-02-13T15:42:09.603295129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}"
Feb 13 15:42:09.611770 kubelet[2250]: I0213 
15:42:09.611721 2250 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:42:09.612217 kubelet[2250]: E0213 15:42:09.612180 2250 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Feb 13 15:42:09.926548 kubelet[2250]: W0213 15:42:09.926452 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:42:09.926548 kubelet[2250]: E0213 15:42:09.926534 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:42:09.934639 kubelet[2250]: E0213 15:42:09.934588 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="1.6s" Feb 13 15:42:09.956441 kubelet[2250]: W0213 15:42:09.956368 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:42:09.956526 kubelet[2250]: E0213 15:42:09.956449 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:42:10.090560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967955336.mount: Deactivated successfully. Feb 13 15:42:10.095390 containerd[1509]: time="2025-02-13T15:42:10.095268906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:10.098034 containerd[1509]: time="2025-02-13T15:42:10.097967778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:42:10.098932 containerd[1509]: time="2025-02-13T15:42:10.098883701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:10.100757 containerd[1509]: time="2025-02-13T15:42:10.100712310Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:10.101407 containerd[1509]: time="2025-02-13T15:42:10.101335503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:42:10.102212 containerd[1509]: time="2025-02-13T15:42:10.102173192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:10.103223 containerd[1509]: time="2025-02-13T15:42:10.103150001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:42:10.104481 containerd[1509]: time="2025-02-13T15:42:10.104437248Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:10.105408 containerd[1509]: time="2025-02-13T15:42:10.105376162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.465715ms" Feb 13 15:42:10.107871 containerd[1509]: time="2025-02-13T15:42:10.107841625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.087686ms" Feb 13 15:42:10.112996 containerd[1509]: time="2025-02-13T15:42:10.112967564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.557688ms" Feb 13 15:42:10.118216 kubelet[2250]: W0213 15:42:10.118141 2250 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused Feb 13 15:42:10.118302 kubelet[2250]: E0213 15:42:10.118222 2250 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.228406902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.228761017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.228869828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.229123973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.228832205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.227688286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.229228294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:10.229333 containerd[1509]: time="2025-02-13T15:42:10.229274216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:10.229688 containerd[1509]: time="2025-02-13T15:42:10.229373908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:10.229688 containerd[1509]: time="2025-02-13T15:42:10.229401278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:10.231523 containerd[1509]: time="2025-02-13T15:42:10.230056543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:10.231923 containerd[1509]: time="2025-02-13T15:42:10.231853782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:10.254602 systemd[1]: Started cri-containerd-e52c66fcfd7c00c45fce8bc3bb614cca688f9ab87e65e7ce45f64be62d38915c.scope - libcontainer container e52c66fcfd7c00c45fce8bc3bb614cca688f9ab87e65e7ce45f64be62d38915c. Feb 13 15:42:10.259649 systemd[1]: Started cri-containerd-08da09699d03877f9dbf33193f0df1b96ecef18f8985614609437471f23e6577.scope - libcontainer container 08da09699d03877f9dbf33193f0df1b96ecef18f8985614609437471f23e6577. Feb 13 15:42:10.261611 systemd[1]: Started cri-containerd-08dcc63663b13ad3fdc28be9b9f21d653500406910650ac48c042b8a3add935f.scope - libcontainer container 08dcc63663b13ad3fdc28be9b9f21d653500406910650ac48c042b8a3add935f. 
Feb 13 15:42:10.297990 containerd[1509]: time="2025-02-13T15:42:10.297932000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"e52c66fcfd7c00c45fce8bc3bb614cca688f9ab87e65e7ce45f64be62d38915c\"" Feb 13 15:42:10.304742 kubelet[2250]: E0213 15:42:10.302578 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:10.306772 containerd[1509]: time="2025-02-13T15:42:10.306728075Z" level=info msg="CreateContainer within sandbox \"e52c66fcfd7c00c45fce8bc3bb614cca688f9ab87e65e7ce45f64be62d38915c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:42:10.307786 containerd[1509]: time="2025-02-13T15:42:10.307753071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f9f0bb8bf3457f29436bd88a6fcd8f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"08da09699d03877f9dbf33193f0df1b96ecef18f8985614609437471f23e6577\"" Feb 13 15:42:10.308411 kubelet[2250]: E0213 15:42:10.308383 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:10.308880 containerd[1509]: time="2025-02-13T15:42:10.308639569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"08dcc63663b13ad3fdc28be9b9f21d653500406910650ac48c042b8a3add935f\"" Feb 13 15:42:10.309706 kubelet[2250]: E0213 15:42:10.309685 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:10.310438 containerd[1509]: 
time="2025-02-13T15:42:10.310415871Z" level=info msg="CreateContainer within sandbox \"08da09699d03877f9dbf33193f0df1b96ecef18f8985614609437471f23e6577\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:42:10.311522 containerd[1509]: time="2025-02-13T15:42:10.311492501Z" level=info msg="CreateContainer within sandbox \"08dcc63663b13ad3fdc28be9b9f21d653500406910650ac48c042b8a3add935f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:42:10.326385 containerd[1509]: time="2025-02-13T15:42:10.326293135Z" level=info msg="CreateContainer within sandbox \"e52c66fcfd7c00c45fce8bc3bb614cca688f9ab87e65e7ce45f64be62d38915c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"398b4caee355c8a49ad79c30402bd25092809693a0669d4cfa42f7996eadad24\"" Feb 13 15:42:10.327001 containerd[1509]: time="2025-02-13T15:42:10.326976873Z" level=info msg="StartContainer for \"398b4caee355c8a49ad79c30402bd25092809693a0669d4cfa42f7996eadad24\"" Feb 13 15:42:10.333306 containerd[1509]: time="2025-02-13T15:42:10.333264890Z" level=info msg="CreateContainer within sandbox \"08dcc63663b13ad3fdc28be9b9f21d653500406910650ac48c042b8a3add935f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10483727b0d65c25401eb0403c34ec312ad2992d144d5ce08f71c7d265d06fe7\"" Feb 13 15:42:10.333894 containerd[1509]: time="2025-02-13T15:42:10.333794586Z" level=info msg="StartContainer for \"10483727b0d65c25401eb0403c34ec312ad2992d144d5ce08f71c7d265d06fe7\"" Feb 13 15:42:10.336615 containerd[1509]: time="2025-02-13T15:42:10.336592888Z" level=info msg="CreateContainer within sandbox \"08da09699d03877f9dbf33193f0df1b96ecef18f8985614609437471f23e6577\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af9b379698aec814a66a14dae28b55295c834b4e69861584de12967af876a3c4\"" Feb 13 15:42:10.337111 containerd[1509]: time="2025-02-13T15:42:10.337065717Z" level=info msg="StartContainer 
for \"af9b379698aec814a66a14dae28b55295c834b4e69861584de12967af876a3c4\"" Feb 13 15:42:10.358078 systemd[1]: Started cri-containerd-398b4caee355c8a49ad79c30402bd25092809693a0669d4cfa42f7996eadad24.scope - libcontainer container 398b4caee355c8a49ad79c30402bd25092809693a0669d4cfa42f7996eadad24. Feb 13 15:42:10.361865 systemd[1]: Started cri-containerd-af9b379698aec814a66a14dae28b55295c834b4e69861584de12967af876a3c4.scope - libcontainer container af9b379698aec814a66a14dae28b55295c834b4e69861584de12967af876a3c4. Feb 13 15:42:10.370181 systemd[1]: Started cri-containerd-10483727b0d65c25401eb0403c34ec312ad2992d144d5ce08f71c7d265d06fe7.scope - libcontainer container 10483727b0d65c25401eb0403c34ec312ad2992d144d5ce08f71c7d265d06fe7. Feb 13 15:42:10.411831 containerd[1509]: time="2025-02-13T15:42:10.411731839Z" level=info msg="StartContainer for \"398b4caee355c8a49ad79c30402bd25092809693a0669d4cfa42f7996eadad24\" returns successfully" Feb 13 15:42:10.411831 containerd[1509]: time="2025-02-13T15:42:10.411762106Z" level=info msg="StartContainer for \"af9b379698aec814a66a14dae28b55295c834b4e69861584de12967af876a3c4\" returns successfully" Feb 13 15:42:10.414385 kubelet[2250]: I0213 15:42:10.414200 2250 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:42:10.415371 kubelet[2250]: E0213 15:42:10.415309 2250 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Feb 13 15:42:10.417762 containerd[1509]: time="2025-02-13T15:42:10.417665199Z" level=info msg="StartContainer for \"10483727b0d65c25401eb0403c34ec312ad2992d144d5ce08f71c7d265d06fe7\" returns successfully" Feb 13 15:42:10.556685 kubelet[2250]: E0213 15:42:10.556575 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:42:10.560584 kubelet[2250]: 
E0213 15:42:10.559533 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:10.560584 kubelet[2250]: E0213 15:42:10.559909 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:42:10.561492 kubelet[2250]: E0213 15:42:10.561432 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:10.561951 kubelet[2250]: E0213 15:42:10.561868 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:42:10.562076 kubelet[2250]: E0213 15:42:10.562023 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:11.563989 kubelet[2250]: E0213 15:42:11.563952 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:42:11.564380 kubelet[2250]: E0213 15:42:11.564098 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:11.564526 kubelet[2250]: E0213 15:42:11.564504 2250 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:42:11.564602 kubelet[2250]: E0213 15:42:11.564589 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 13 15:42:11.725597 kubelet[2250]: E0213 15:42:11.725528 2250 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:42:12.017701 kubelet[2250]: I0213 15:42:12.017660 2250 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:42:12.055289 kubelet[2250]: I0213 15:42:12.055251 2250 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 15:42:12.132182 kubelet[2250]: I0213 15:42:12.132132 2250 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:12.360982 kubelet[2250]: E0213 15:42:12.360833 2250 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:12.360982 kubelet[2250]: I0213 15:42:12.360870 2250 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:12.362645 kubelet[2250]: E0213 15:42:12.362604 2250 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:12.362645 kubelet[2250]: I0213 15:42:12.362631 2250 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 15:42:12.363979 kubelet[2250]: E0213 15:42:12.363958 2250 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 15:42:12.514938 kubelet[2250]: I0213 15:42:12.514867 2250 apiserver.go:52] "Watching apiserver" Feb 13 15:42:12.531627 kubelet[2250]: I0213 15:42:12.531590 2250 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:42:12.950467 kubelet[2250]: I0213 15:42:12.950433 2250 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:12.955347 kubelet[2250]: E0213 15:42:12.955310 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:13.566016 kubelet[2250]: E0213 15:42:13.565986 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:13.761910 systemd[1]: Reload requested from client PID 2531 ('systemctl') (unit session-7.scope)... Feb 13 15:42:13.761926 systemd[1]: Reloading... Feb 13 15:42:13.868362 zram_generator::config[2578]: No configuration found. Feb 13 15:42:13.883096 kubelet[2250]: I0213 15:42:13.883063 2250 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:13.889903 kubelet[2250]: E0213 15:42:13.889861 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:13.989099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:42:14.108157 systemd[1]: Reloading finished in 345 ms. Feb 13 15:42:14.138545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:42:14.160871 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:42:14.161177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:42:14.161239 systemd[1]: kubelet.service: Consumed 694ms CPU time, 129.3M memory peak. Feb 13 15:42:14.171570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:42:14.356063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:42:14.360444 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:42:14.408982 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:42:14.408982 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:42:14.408982 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:42:14.408982 kubelet[2620]: I0213 15:42:14.408945 2620 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:42:14.417092 kubelet[2620]: I0213 15:42:14.417042 2620 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:42:14.417092 kubelet[2620]: I0213 15:42:14.417070 2620 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:42:14.417299 kubelet[2620]: I0213 15:42:14.417281 2620 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:42:14.418423 kubelet[2620]: I0213 15:42:14.418402 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 15:42:14.420908 kubelet[2620]: I0213 15:42:14.420775 2620 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:42:14.423921 kubelet[2620]: E0213 15:42:14.423891 2620 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:42:14.423921 kubelet[2620]: I0213 15:42:14.423919 2620 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:42:14.429369 kubelet[2620]: I0213 15:42:14.429309 2620 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:42:14.429799 kubelet[2620]: I0213 15:42:14.429743 2620 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:42:14.430038 kubelet[2620]: I0213 15:42:14.429787 2620 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:42:14.430038 kubelet[2620]: I0213 15:42:14.430031 2620 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:42:14.430038 kubelet[2620]: I0213 15:42:14.430044 2620 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:42:14.430345 kubelet[2620]: I0213 15:42:14.430095 2620 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:42:14.430345 kubelet[2620]: I0213 15:42:14.430284 2620 kubelet.go:446] "Attempting 
to sync node with API server" Feb 13 15:42:14.430345 kubelet[2620]: I0213 15:42:14.430301 2620 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:42:14.430345 kubelet[2620]: I0213 15:42:14.430345 2620 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:42:14.430478 kubelet[2620]: I0213 15:42:14.430360 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:42:14.434349 kubelet[2620]: I0213 15:42:14.433377 2620 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:42:14.440089 kubelet[2620]: I0213 15:42:14.440046 2620 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:42:14.440843 kubelet[2620]: I0213 15:42:14.440806 2620 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:42:14.440884 kubelet[2620]: I0213 15:42:14.440854 2620 server.go:1287] "Started kubelet" Feb 13 15:42:14.441239 kubelet[2620]: I0213 15:42:14.441214 2620 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:42:14.443285 kubelet[2620]: I0213 15:42:14.443243 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:42:14.443831 kubelet[2620]: I0213 15:42:14.443798 2620 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:42:14.448831 kubelet[2620]: I0213 15:42:14.446535 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:42:14.448831 kubelet[2620]: I0213 15:42:14.446776 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:42:14.450521 kubelet[2620]: I0213 15:42:14.449966 2620 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:42:14.452141 kubelet[2620]: E0213 15:42:14.451865 2620 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:42:14.452141 kubelet[2620]: I0213 15:42:14.451904 2620 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:42:14.452141 kubelet[2620]: I0213 15:42:14.452095 2620 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:42:14.452264 kubelet[2620]: I0213 15:42:14.452253 2620 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:42:14.452724 kubelet[2620]: E0213 15:42:14.452699 2620 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:42:14.452874 kubelet[2620]: I0213 15:42:14.452846 2620 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:42:14.453017 kubelet[2620]: I0213 15:42:14.452988 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:42:14.454114 kubelet[2620]: I0213 15:42:14.454098 2620 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:42:14.470208 kubelet[2620]: I0213 15:42:14.470129 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:42:14.474457 kubelet[2620]: I0213 15:42:14.474425 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:42:14.474638 kubelet[2620]: I0213 15:42:14.474624 2620 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:42:14.474773 kubelet[2620]: I0213 15:42:14.474734 2620 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 15:42:14.474773 kubelet[2620]: I0213 15:42:14.474749 2620 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:42:14.475521 kubelet[2620]: E0213 15:42:14.475468 2620 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:42:14.506077 kubelet[2620]: I0213 15:42:14.506041 2620 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:42:14.506077 kubelet[2620]: I0213 15:42:14.506063 2620 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:42:14.506077 kubelet[2620]: I0213 15:42:14.506081 2620 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:42:14.506283 kubelet[2620]: I0213 15:42:14.506226 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:42:14.506283 kubelet[2620]: I0213 15:42:14.506237 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:42:14.506283 kubelet[2620]: I0213 15:42:14.506257 2620 policy_none.go:49] "None policy: Start" Feb 13 15:42:14.506283 kubelet[2620]: I0213 15:42:14.506266 2620 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:42:14.506283 kubelet[2620]: I0213 15:42:14.506276 2620 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:42:14.506449 kubelet[2620]: I0213 15:42:14.506393 2620 state_mem.go:75] "Updated machine memory state" Feb 13 15:42:14.511060 kubelet[2620]: I0213 15:42:14.511036 2620 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:42:14.511376 kubelet[2620]: I0213 15:42:14.511362 2620 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:42:14.511480 kubelet[2620]: I0213 15:42:14.511436 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:42:14.511756 kubelet[2620]: I0213 15:42:14.511740 2620 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:42:14.512955 kubelet[2620]: E0213 15:42:14.512728 2620 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 15:42:14.576397 kubelet[2620]: I0213 15:42:14.576339 2620 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 15:42:14.576918 kubelet[2620]: I0213 15:42:14.576453 2620 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:14.576918 kubelet[2620]: I0213 15:42:14.576551 2620 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.582983 kubelet[2620]: E0213 15:42:14.582940 2620 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:14.583583 kubelet[2620]: E0213 15:42:14.583294 2620 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.616744 kubelet[2620]: I0213 15:42:14.616713 2620 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:42:14.621935 kubelet[2620]: I0213 15:42:14.621888 2620 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 15:42:14.622091 kubelet[2620]: I0213 15:42:14.621978 2620 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 15:42:14.653428 kubelet[2620]: I0213 15:42:14.653359 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f9f0bb8bf3457f29436bd88a6fcd8f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f9f0bb8bf3457f29436bd88a6fcd8f8\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:14.653428 kubelet[2620]: I0213 15:42:14.653433 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f9f0bb8bf3457f29436bd88a6fcd8f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f9f0bb8bf3457f29436bd88a6fcd8f8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:14.653617 kubelet[2620]: I0213 15:42:14.653460 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.653617 kubelet[2620]: I0213 15:42:14.653477 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f9f0bb8bf3457f29436bd88a6fcd8f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f9f0bb8bf3457f29436bd88a6fcd8f8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:14.653617 kubelet[2620]: I0213 15:42:14.653510 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.653617 kubelet[2620]: I0213 15:42:14.653532 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.653617 kubelet[2620]: I0213 15:42:14.653552 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.653736 kubelet[2620]: I0213 15:42:14.653573 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:14.653736 kubelet[2620]: I0213 15:42:14.653594 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:42:14.884690 kubelet[2620]: E0213 15:42:14.884395 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:14.884690 kubelet[2620]: E0213 15:42:14.884569 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:14.884690 kubelet[2620]: E0213 15:42:14.884618 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
15:42:15.431411 kubelet[2620]: I0213 15:42:15.431346 2620 apiserver.go:52] "Watching apiserver" Feb 13 15:42:15.501503 kubelet[2620]: E0213 15:42:15.501398 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:15.501503 kubelet[2620]: E0213 15:42:15.501462 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:15.501503 kubelet[2620]: E0213 15:42:15.501512 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:15.553425 kubelet[2620]: I0213 15:42:15.553379 2620 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:42:15.745429 kubelet[2620]: I0213 15:42:15.745275 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.7452282009999998 podStartE2EDuration="3.745228201s" podCreationTimestamp="2025-02-13 15:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:15.744360145 +0000 UTC m=+1.380124270" watchObservedRunningTime="2025-02-13 15:42:15.745228201 +0000 UTC m=+1.380992336" Feb 13 15:42:15.790140 kubelet[2620]: I0213 15:42:15.789941 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.789916943 podStartE2EDuration="2.789916943s" podCreationTimestamp="2025-02-13 15:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:15.761279999 +0000 
UTC m=+1.397044124" watchObservedRunningTime="2025-02-13 15:42:15.789916943 +0000 UTC m=+1.425681068" Feb 13 15:42:15.790140 kubelet[2620]: I0213 15:42:15.790043 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.790037199 podStartE2EDuration="1.790037199s" podCreationTimestamp="2025-02-13 15:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:15.789478255 +0000 UTC m=+1.425242380" watchObservedRunningTime="2025-02-13 15:42:15.790037199 +0000 UTC m=+1.425801325" Feb 13 15:42:16.503279 kubelet[2620]: E0213 15:42:16.503227 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:16.506343 kubelet[2620]: E0213 15:42:16.503849 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:16.506343 kubelet[2620]: E0213 15:42:16.504119 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:19.086050 kubelet[2620]: I0213 15:42:19.085882 2620 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:42:19.086629 kubelet[2620]: I0213 15:42:19.086395 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:42:19.086672 containerd[1509]: time="2025-02-13T15:42:19.086198859Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:42:19.290910 kubelet[2620]: E0213 15:42:19.290879 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:19.708888 systemd[1]: Created slice kubepods-besteffort-pod40b5c6b7_386b_4ba2_a3e0_71954b0a7b3e.slice - libcontainer container kubepods-besteffort-pod40b5c6b7_386b_4ba2_a3e0_71954b0a7b3e.slice. Feb 13 15:42:19.791764 kubelet[2620]: I0213 15:42:19.791699 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e-kube-proxy\") pod \"kube-proxy-jtbkm\" (UID: \"40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e\") " pod="kube-system/kube-proxy-jtbkm" Feb 13 15:42:19.791764 kubelet[2620]: I0213 15:42:19.791754 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e-xtables-lock\") pod \"kube-proxy-jtbkm\" (UID: \"40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e\") " pod="kube-system/kube-proxy-jtbkm" Feb 13 15:42:19.791764 kubelet[2620]: I0213 15:42:19.791775 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e-lib-modules\") pod \"kube-proxy-jtbkm\" (UID: \"40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e\") " pod="kube-system/kube-proxy-jtbkm" Feb 13 15:42:19.792022 kubelet[2620]: I0213 15:42:19.791797 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbzbv\" (UniqueName: \"kubernetes.io/projected/40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e-kube-api-access-tbzbv\") pod \"kube-proxy-jtbkm\" (UID: \"40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e\") " pod="kube-system/kube-proxy-jtbkm" Feb 13 15:42:19.838670 
update_engine[1503]: I20250213 15:42:19.838498 1503 update_attempter.cc:509] Updating boot flags... Feb 13 15:42:19.929370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2699) Feb 13 15:42:19.987443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2697) Feb 13 15:42:20.024166 kubelet[2620]: E0213 15:42:20.020623 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:20.024334 containerd[1509]: time="2025-02-13T15:42:20.022392076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jtbkm,Uid:40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:20.047346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2697) Feb 13 15:42:20.173914 containerd[1509]: time="2025-02-13T15:42:20.173619569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:20.173914 containerd[1509]: time="2025-02-13T15:42:20.173697772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:20.173914 containerd[1509]: time="2025-02-13T15:42:20.173713985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:20.173914 containerd[1509]: time="2025-02-13T15:42:20.173812560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:20.196006 systemd[1]: Created slice kubepods-besteffort-pod858ae04d_574f_410b_be65_03b863b27bb8.slice - libcontainer container kubepods-besteffort-pod858ae04d_574f_410b_be65_03b863b27bb8.slice. 
Feb 13 15:42:20.199454 kubelet[2620]: I0213 15:42:20.199431 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglbr\" (UniqueName: \"kubernetes.io/projected/858ae04d-574f-410b-be65-03b863b27bb8-kube-api-access-sglbr\") pod \"tigera-operator-7d68577dc5-g7jjd\" (UID: \"858ae04d-574f-410b-be65-03b863b27bb8\") " pod="tigera-operator/tigera-operator-7d68577dc5-g7jjd" Feb 13 15:42:20.200354 kubelet[2620]: I0213 15:42:20.199918 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/858ae04d-574f-410b-be65-03b863b27bb8-var-lib-calico\") pod \"tigera-operator-7d68577dc5-g7jjd\" (UID: \"858ae04d-574f-410b-be65-03b863b27bb8\") " pod="tigera-operator/tigera-operator-7d68577dc5-g7jjd" Feb 13 15:42:20.216554 systemd[1]: Started cri-containerd-0c2e385c58df026a5169aed66091bd01bc8a54dc11bfd7c98f4316c56436b4b8.scope - libcontainer container 0c2e385c58df026a5169aed66091bd01bc8a54dc11bfd7c98f4316c56436b4b8. 
Feb 13 15:42:20.243111 containerd[1509]: time="2025-02-13T15:42:20.242939463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jtbkm,Uid:40b5c6b7-386b-4ba2-a3e0-71954b0a7b3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c2e385c58df026a5169aed66091bd01bc8a54dc11bfd7c98f4316c56436b4b8\"" Feb 13 15:42:20.243965 kubelet[2620]: E0213 15:42:20.243944 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:20.246725 containerd[1509]: time="2025-02-13T15:42:20.246593959Z" level=info msg="CreateContainer within sandbox \"0c2e385c58df026a5169aed66091bd01bc8a54dc11bfd7c98f4316c56436b4b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:42:20.274090 containerd[1509]: time="2025-02-13T15:42:20.274034429Z" level=info msg="CreateContainer within sandbox \"0c2e385c58df026a5169aed66091bd01bc8a54dc11bfd7c98f4316c56436b4b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"621dd2c59164afba40a39a40e69f4514a51073c71271fc1fcdbc9ffe13477ecb\"" Feb 13 15:42:20.275562 containerd[1509]: time="2025-02-13T15:42:20.275060482Z" level=info msg="StartContainer for \"621dd2c59164afba40a39a40e69f4514a51073c71271fc1fcdbc9ffe13477ecb\"" Feb 13 15:42:20.290586 sudo[1693]: pam_unix(sudo:session): session closed for user root Feb 13 15:42:20.292910 sshd[1692]: Connection closed by 10.0.0.1 port 57166 Feb 13 15:42:20.293595 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:20.300592 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:57166.service: Deactivated successfully. Feb 13 15:42:20.303874 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:42:20.304190 systemd[1]: session-7.scope: Consumed 4.490s CPU time, 210.3M memory peak. Feb 13 15:42:20.306279 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit. 
Feb 13 15:42:20.315534 systemd[1]: Started cri-containerd-621dd2c59164afba40a39a40e69f4514a51073c71271fc1fcdbc9ffe13477ecb.scope - libcontainer container 621dd2c59164afba40a39a40e69f4514a51073c71271fc1fcdbc9ffe13477ecb. Feb 13 15:42:20.318143 systemd-logind[1495]: Removed session 7. Feb 13 15:42:20.353020 containerd[1509]: time="2025-02-13T15:42:20.352965481Z" level=info msg="StartContainer for \"621dd2c59164afba40a39a40e69f4514a51073c71271fc1fcdbc9ffe13477ecb\" returns successfully" Feb 13 15:42:20.502484 containerd[1509]: time="2025-02-13T15:42:20.502307686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-g7jjd,Uid:858ae04d-574f-410b-be65-03b863b27bb8,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:42:20.509851 kubelet[2620]: E0213 15:42:20.509803 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:20.534550 containerd[1509]: time="2025-02-13T15:42:20.533771418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:20.534550 containerd[1509]: time="2025-02-13T15:42:20.534426620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:20.534550 containerd[1509]: time="2025-02-13T15:42:20.534443695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:20.534790 containerd[1509]: time="2025-02-13T15:42:20.534537171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:20.557468 systemd[1]: Started cri-containerd-cc48a0480791d45daf85a20279bc62187611a8577f5009c9761cd8d84b8e3bc1.scope - libcontainer container cc48a0480791d45daf85a20279bc62187611a8577f5009c9761cd8d84b8e3bc1. Feb 13 15:42:20.595360 containerd[1509]: time="2025-02-13T15:42:20.595298453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-g7jjd,Uid:858ae04d-574f-410b-be65-03b863b27bb8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cc48a0480791d45daf85a20279bc62187611a8577f5009c9761cd8d84b8e3bc1\"" Feb 13 15:42:20.597183 containerd[1509]: time="2025-02-13T15:42:20.597142356Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:42:22.438928 kubelet[2620]: E0213 15:42:22.438887 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:22.453783 kubelet[2620]: I0213 15:42:22.453711 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jtbkm" podStartSLOduration=3.45368667 podStartE2EDuration="3.45368667s" podCreationTimestamp="2025-02-13 15:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:20.518621647 +0000 UTC m=+6.154385782" watchObservedRunningTime="2025-02-13 15:42:22.45368667 +0000 UTC m=+8.089450815" Feb 13 15:42:22.514198 kubelet[2620]: E0213 15:42:22.514108 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:23.516344 kubelet[2620]: E0213 15:42:23.516146 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:26.076140 kubelet[2620]: E0213 15:42:26.076113 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:28.279045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536047334.mount: Deactivated successfully. Feb 13 15:42:28.704883 containerd[1509]: time="2025-02-13T15:42:28.704790205Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:28.706804 containerd[1509]: time="2025-02-13T15:42:28.706760608Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 15:42:28.709542 containerd[1509]: time="2025-02-13T15:42:28.709491973Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:28.711673 containerd[1509]: time="2025-02-13T15:42:28.711631096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:28.712232 containerd[1509]: time="2025-02-13T15:42:28.712196005Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 8.115010049s" Feb 13 15:42:28.712232 containerd[1509]: time="2025-02-13T15:42:28.712225173Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 
15:42:28.714347 containerd[1509]: time="2025-02-13T15:42:28.714283393Z" level=info msg="CreateContainer within sandbox \"cc48a0480791d45daf85a20279bc62187611a8577f5009c9761cd8d84b8e3bc1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:42:28.729406 containerd[1509]: time="2025-02-13T15:42:28.729353172Z" level=info msg="CreateContainer within sandbox \"cc48a0480791d45daf85a20279bc62187611a8577f5009c9761cd8d84b8e3bc1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e2cf41a38b386ddcb7fcfeead4110746817fce6729d2022c7b893c749d47d0c\"" Feb 13 15:42:28.729837 containerd[1509]: time="2025-02-13T15:42:28.729796195Z" level=info msg="StartContainer for \"9e2cf41a38b386ddcb7fcfeead4110746817fce6729d2022c7b893c749d47d0c\"" Feb 13 15:42:28.730083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395841612.mount: Deactivated successfully. Feb 13 15:42:28.756604 systemd[1]: Started cri-containerd-9e2cf41a38b386ddcb7fcfeead4110746817fce6729d2022c7b893c749d47d0c.scope - libcontainer container 9e2cf41a38b386ddcb7fcfeead4110746817fce6729d2022c7b893c749d47d0c. 
Feb 13 15:42:28.784871 containerd[1509]: time="2025-02-13T15:42:28.784828856Z" level=info msg="StartContainer for \"9e2cf41a38b386ddcb7fcfeead4110746817fce6729d2022c7b893c749d47d0c\" returns successfully" Feb 13 15:42:29.295486 kubelet[2620]: E0213 15:42:29.295455 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:29.525782 kubelet[2620]: E0213 15:42:29.525747 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:29.647939 kubelet[2620]: I0213 15:42:29.647556 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-g7jjd" podStartSLOduration=1.5309995440000002 podStartE2EDuration="9.647537425s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="2025-02-13 15:42:20.596504861 +0000 UTC m=+6.232268976" lastFinishedPulling="2025-02-13 15:42:28.713042732 +0000 UTC m=+14.348806857" observedRunningTime="2025-02-13 15:42:29.647358175 +0000 UTC m=+15.283122300" watchObservedRunningTime="2025-02-13 15:42:29.647537425 +0000 UTC m=+15.283301550" Feb 13 15:42:32.281563 systemd[1]: Created slice kubepods-besteffort-podcbd14d2d_f543_49ee_b055_f7ff3d06a9a4.slice - libcontainer container kubepods-besteffort-podcbd14d2d_f543_49ee_b055_f7ff3d06a9a4.slice. 
Feb 13 15:42:32.372516 kubelet[2620]: I0213 15:42:32.371940 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbw24\" (UniqueName: \"kubernetes.io/projected/cbd14d2d-f543-49ee-b055-f7ff3d06a9a4-kube-api-access-xbw24\") pod \"calico-typha-656966c65c-7dj7w\" (UID: \"cbd14d2d-f543-49ee-b055-f7ff3d06a9a4\") " pod="calico-system/calico-typha-656966c65c-7dj7w" Feb 13 15:42:32.372516 kubelet[2620]: I0213 15:42:32.372007 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cbd14d2d-f543-49ee-b055-f7ff3d06a9a4-typha-certs\") pod \"calico-typha-656966c65c-7dj7w\" (UID: \"cbd14d2d-f543-49ee-b055-f7ff3d06a9a4\") " pod="calico-system/calico-typha-656966c65c-7dj7w" Feb 13 15:42:32.372516 kubelet[2620]: I0213 15:42:32.372038 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd14d2d-f543-49ee-b055-f7ff3d06a9a4-tigera-ca-bundle\") pod \"calico-typha-656966c65c-7dj7w\" (UID: \"cbd14d2d-f543-49ee-b055-f7ff3d06a9a4\") " pod="calico-system/calico-typha-656966c65c-7dj7w" Feb 13 15:42:32.384695 systemd[1]: Created slice kubepods-besteffort-podaeee55cf_4201_4197_bf08_2d2bf0ed7ef2.slice - libcontainer container kubepods-besteffort-podaeee55cf_4201_4197_bf08_2d2bf0ed7ef2.slice. 
Feb 13 15:42:32.472456 kubelet[2620]: I0213 15:42:32.472396 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-tigera-ca-bundle\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472456 kubelet[2620]: I0213 15:42:32.472440 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-flexvol-driver-host\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472456 kubelet[2620]: I0213 15:42:32.472460 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-var-run-calico\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472704 kubelet[2620]: I0213 15:42:32.472526 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-cni-bin-dir\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472704 kubelet[2620]: I0213 15:42:32.472547 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-xtables-lock\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472704 kubelet[2620]: I0213 15:42:32.472567 
2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-policysync\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472704 kubelet[2620]: I0213 15:42:32.472586 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-node-certs\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472704 kubelet[2620]: I0213 15:42:32.472615 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-cni-net-dir\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472877 kubelet[2620]: I0213 15:42:32.472651 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-cni-log-dir\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472877 kubelet[2620]: I0213 15:42:32.472669 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-lib-modules\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472877 kubelet[2620]: I0213 15:42:32.472704 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-var-lib-calico\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.472877 kubelet[2620]: I0213 15:42:32.472812 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9rnk\" (UniqueName: \"kubernetes.io/projected/aeee55cf-4201-4197-bf08-2d2bf0ed7ef2-kube-api-access-r9rnk\") pod \"calico-node-chxfg\" (UID: \"aeee55cf-4201-4197-bf08-2d2bf0ed7ef2\") " pod="calico-system/calico-node-chxfg" Feb 13 15:42:32.577712 kubelet[2620]: E0213 15:42:32.577628 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:32.577712 kubelet[2620]: W0213 15:42:32.577648 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:32.577712 kubelet[2620]: E0213 15:42:32.577668 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:32.803389 kubelet[2620]: E0213 15:42:32.803351 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:32.803389 kubelet[2620]: W0213 15:42:32.803380 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:32.803562 kubelet[2620]: E0213 15:42:32.803402 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:32.803995 kubelet[2620]: E0213 15:42:32.803977 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:32.803995 kubelet[2620]: W0213 15:42:32.803992 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:32.804079 kubelet[2620]: E0213 15:42:32.804004 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:32.887179 kubelet[2620]: E0213 15:42:32.887140 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:32.887761 containerd[1509]: time="2025-02-13T15:42:32.887699893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656966c65c-7dj7w,Uid:cbd14d2d-f543-49ee-b055-f7ff3d06a9a4,Namespace:calico-system,Attempt:0,}" Feb 13 15:42:32.987239 kubelet[2620]: E0213 15:42:32.987208 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:32.987906 containerd[1509]: time="2025-02-13T15:42:32.987683769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-chxfg,Uid:aeee55cf-4201-4197-bf08-2d2bf0ed7ef2,Namespace:calico-system,Attempt:0,}" Feb 13 15:42:33.063240 kubelet[2620]: E0213 15:42:33.063186 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:33.158791 kubelet[2620]: E0213 15:42:33.158681 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.158791 kubelet[2620]: W0213 15:42:33.158703 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.158791 kubelet[2620]: E0213 15:42:33.158724 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.158970 kubelet[2620]: E0213 15:42:33.158942 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.158970 kubelet[2620]: W0213 15:42:33.158967 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.159040 kubelet[2620]: E0213 15:42:33.158978 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.159225 kubelet[2620]: E0213 15:42:33.159202 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.159225 kubelet[2620]: W0213 15:42:33.159213 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.159225 kubelet[2620]: E0213 15:42:33.159222 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.159481 kubelet[2620]: E0213 15:42:33.159453 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.159481 kubelet[2620]: W0213 15:42:33.159465 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.159481 kubelet[2620]: E0213 15:42:33.159473 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.159658 kubelet[2620]: E0213 15:42:33.159644 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.159658 kubelet[2620]: W0213 15:42:33.159655 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.159705 kubelet[2620]: E0213 15:42:33.159663 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.159822 kubelet[2620]: E0213 15:42:33.159809 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.159822 kubelet[2620]: W0213 15:42:33.159819 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.159874 kubelet[2620]: E0213 15:42:33.159826 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.159984 kubelet[2620]: E0213 15:42:33.159971 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.159984 kubelet[2620]: W0213 15:42:33.159980 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.160030 kubelet[2620]: E0213 15:42:33.159988 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.160169 kubelet[2620]: E0213 15:42:33.160154 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.160169 kubelet[2620]: W0213 15:42:33.160166 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.160221 kubelet[2620]: E0213 15:42:33.160175 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.160416 kubelet[2620]: E0213 15:42:33.160398 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.160416 kubelet[2620]: W0213 15:42:33.160411 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.160475 kubelet[2620]: E0213 15:42:33.160421 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.160597 kubelet[2620]: E0213 15:42:33.160583 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.160597 kubelet[2620]: W0213 15:42:33.160593 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.160650 kubelet[2620]: E0213 15:42:33.160600 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.160757 kubelet[2620]: E0213 15:42:33.160744 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.160757 kubelet[2620]: W0213 15:42:33.160754 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.160805 kubelet[2620]: E0213 15:42:33.160763 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.160919 kubelet[2620]: E0213 15:42:33.160906 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.160919 kubelet[2620]: W0213 15:42:33.160915 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.160967 kubelet[2620]: E0213 15:42:33.160922 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.161089 kubelet[2620]: E0213 15:42:33.161075 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.161089 kubelet[2620]: W0213 15:42:33.161085 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.161136 kubelet[2620]: E0213 15:42:33.161093 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.161247 kubelet[2620]: E0213 15:42:33.161234 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.161247 kubelet[2620]: W0213 15:42:33.161244 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.161301 kubelet[2620]: E0213 15:42:33.161255 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.161468 kubelet[2620]: E0213 15:42:33.161454 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.161468 kubelet[2620]: W0213 15:42:33.161465 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.161518 kubelet[2620]: E0213 15:42:33.161472 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.161646 kubelet[2620]: E0213 15:42:33.161632 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.161646 kubelet[2620]: W0213 15:42:33.161642 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.161692 kubelet[2620]: E0213 15:42:33.161649 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.161829 kubelet[2620]: E0213 15:42:33.161814 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.161829 kubelet[2620]: W0213 15:42:33.161827 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.161877 kubelet[2620]: E0213 15:42:33.161836 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.162017 kubelet[2620]: E0213 15:42:33.162004 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.162017 kubelet[2620]: W0213 15:42:33.162014 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.162063 kubelet[2620]: E0213 15:42:33.162021 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.162177 kubelet[2620]: E0213 15:42:33.162164 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.162177 kubelet[2620]: W0213 15:42:33.162173 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.162217 kubelet[2620]: E0213 15:42:33.162180 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.162376 kubelet[2620]: E0213 15:42:33.162360 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.162376 kubelet[2620]: W0213 15:42:33.162372 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.162439 kubelet[2620]: E0213 15:42:33.162381 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.177638 kubelet[2620]: E0213 15:42:33.177613 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.177638 kubelet[2620]: W0213 15:42:33.177627 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.177638 kubelet[2620]: E0213 15:42:33.177638 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.177729 kubelet[2620]: I0213 15:42:33.177659 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb72e478-27f9-4b96-ac63-312fc0de0c3b-registration-dir\") pod \"csi-node-driver-wtt2x\" (UID: \"fb72e478-27f9-4b96-ac63-312fc0de0c3b\") " pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:33.177876 kubelet[2620]: E0213 15:42:33.177857 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.177876 kubelet[2620]: W0213 15:42:33.177873 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.177937 kubelet[2620]: E0213 15:42:33.177887 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.178104 kubelet[2620]: E0213 15:42:33.178089 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.178104 kubelet[2620]: W0213 15:42:33.178100 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.178169 kubelet[2620]: E0213 15:42:33.178112 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.178299 kubelet[2620]: E0213 15:42:33.178285 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.178299 kubelet[2620]: W0213 15:42:33.178296 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.178299 kubelet[2620]: E0213 15:42:33.178303 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.178458 kubelet[2620]: I0213 15:42:33.178346 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb72e478-27f9-4b96-ac63-312fc0de0c3b-socket-dir\") pod \"csi-node-driver-wtt2x\" (UID: \"fb72e478-27f9-4b96-ac63-312fc0de0c3b\") " pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:33.178560 kubelet[2620]: E0213 15:42:33.178543 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.178560 kubelet[2620]: W0213 15:42:33.178556 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.178617 kubelet[2620]: E0213 15:42:33.178569 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.178617 kubelet[2620]: I0213 15:42:33.178582 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fb72e478-27f9-4b96-ac63-312fc0de0c3b-varrun\") pod \"csi-node-driver-wtt2x\" (UID: \"fb72e478-27f9-4b96-ac63-312fc0de0c3b\") " pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:33.178791 kubelet[2620]: E0213 15:42:33.178766 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.178791 kubelet[2620]: W0213 15:42:33.178783 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.178843 kubelet[2620]: E0213 15:42:33.178796 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.178977 kubelet[2620]: E0213 15:42:33.178961 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.178977 kubelet[2620]: W0213 15:42:33.178974 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.179026 kubelet[2620]: E0213 15:42:33.178987 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.179153 kubelet[2620]: E0213 15:42:33.179140 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.179153 kubelet[2620]: W0213 15:42:33.179150 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.179194 kubelet[2620]: E0213 15:42:33.179161 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.179194 kubelet[2620]: I0213 15:42:33.179177 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb72e478-27f9-4b96-ac63-312fc0de0c3b-kubelet-dir\") pod \"csi-node-driver-wtt2x\" (UID: \"fb72e478-27f9-4b96-ac63-312fc0de0c3b\") " pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:33.179373 kubelet[2620]: E0213 15:42:33.179357 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.179373 kubelet[2620]: W0213 15:42:33.179370 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.179416 kubelet[2620]: E0213 15:42:33.179383 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.179562 kubelet[2620]: E0213 15:42:33.179537 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.179562 kubelet[2620]: W0213 15:42:33.179559 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.179612 kubelet[2620]: E0213 15:42:33.179570 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.179751 kubelet[2620]: E0213 15:42:33.179735 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.179751 kubelet[2620]: W0213 15:42:33.179748 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.179792 kubelet[2620]: E0213 15:42:33.179760 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.179792 kubelet[2620]: I0213 15:42:33.179775 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcx4\" (UniqueName: \"kubernetes.io/projected/fb72e478-27f9-4b96-ac63-312fc0de0c3b-kube-api-access-rdcx4\") pod \"csi-node-driver-wtt2x\" (UID: \"fb72e478-27f9-4b96-ac63-312fc0de0c3b\") " pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:33.179958 kubelet[2620]: E0213 15:42:33.179931 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.179958 kubelet[2620]: W0213 15:42:33.179944 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.179958 kubelet[2620]: E0213 15:42:33.179955 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.180136 kubelet[2620]: E0213 15:42:33.180123 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.180136 kubelet[2620]: W0213 15:42:33.180133 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.180181 kubelet[2620]: E0213 15:42:33.180144 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.180370 kubelet[2620]: E0213 15:42:33.180347 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.180370 kubelet[2620]: W0213 15:42:33.180361 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.180370 kubelet[2620]: E0213 15:42:33.180371 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.180617 kubelet[2620]: E0213 15:42:33.180592 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.180650 kubelet[2620]: W0213 15:42:33.180614 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.180650 kubelet[2620]: E0213 15:42:33.180639 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.280834 kubelet[2620]: E0213 15:42:33.280781 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.280834 kubelet[2620]: W0213 15:42:33.280817 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.281036 kubelet[2620]: E0213 15:42:33.280844 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.281174 kubelet[2620]: E0213 15:42:33.281155 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.281174 kubelet[2620]: W0213 15:42:33.281170 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.281235 kubelet[2620]: E0213 15:42:33.281186 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.281465 kubelet[2620]: E0213 15:42:33.281442 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.281465 kubelet[2620]: W0213 15:42:33.281459 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.281523 kubelet[2620]: E0213 15:42:33.281479 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.281723 kubelet[2620]: E0213 15:42:33.281711 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.281752 kubelet[2620]: W0213 15:42:33.281723 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.281752 kubelet[2620]: E0213 15:42:33.281737 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.281961 kubelet[2620]: E0213 15:42:33.281950 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.281961 kubelet[2620]: W0213 15:42:33.281958 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.282023 kubelet[2620]: E0213 15:42:33.281971 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.282235 kubelet[2620]: E0213 15:42:33.282223 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.282235 kubelet[2620]: W0213 15:42:33.282232 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.282291 kubelet[2620]: E0213 15:42:33.282245 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.282503 kubelet[2620]: E0213 15:42:33.282485 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.282503 kubelet[2620]: W0213 15:42:33.282495 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.282549 kubelet[2620]: E0213 15:42:33.282508 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.282711 kubelet[2620]: E0213 15:42:33.282700 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.282711 kubelet[2620]: W0213 15:42:33.282708 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.282766 kubelet[2620]: E0213 15:42:33.282739 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.282933 kubelet[2620]: E0213 15:42:33.282922 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.282933 kubelet[2620]: W0213 15:42:33.282932 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.282977 kubelet[2620]: E0213 15:42:33.282963 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.283161 kubelet[2620]: E0213 15:42:33.283134 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.283161 kubelet[2620]: W0213 15:42:33.283145 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.283161 kubelet[2620]: E0213 15:42:33.283156 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.283445 kubelet[2620]: E0213 15:42:33.283400 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.283445 kubelet[2620]: W0213 15:42:33.283407 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.283445 kubelet[2620]: E0213 15:42:33.283419 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.283639 kubelet[2620]: E0213 15:42:33.283620 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.283639 kubelet[2620]: W0213 15:42:33.283630 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.283704 kubelet[2620]: E0213 15:42:33.283645 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.283886 kubelet[2620]: E0213 15:42:33.283869 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.283886 kubelet[2620]: W0213 15:42:33.283879 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.283954 kubelet[2620]: E0213 15:42:33.283891 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.284090 kubelet[2620]: E0213 15:42:33.284074 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.284090 kubelet[2620]: W0213 15:42:33.284085 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.284156 kubelet[2620]: E0213 15:42:33.284097 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.284357 kubelet[2620]: E0213 15:42:33.284340 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.284357 kubelet[2620]: W0213 15:42:33.284351 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.284433 kubelet[2620]: E0213 15:42:33.284376 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.284576 kubelet[2620]: E0213 15:42:33.284559 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.284576 kubelet[2620]: W0213 15:42:33.284572 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.284642 kubelet[2620]: E0213 15:42:33.284598 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.284788 kubelet[2620]: E0213 15:42:33.284772 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.284788 kubelet[2620]: W0213 15:42:33.284782 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.284849 kubelet[2620]: E0213 15:42:33.284800 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.284996 kubelet[2620]: E0213 15:42:33.284981 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.284996 kubelet[2620]: W0213 15:42:33.284990 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.285057 kubelet[2620]: E0213 15:42:33.285001 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.285230 kubelet[2620]: E0213 15:42:33.285213 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.285230 kubelet[2620]: W0213 15:42:33.285226 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.285298 kubelet[2620]: E0213 15:42:33.285240 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.285481 kubelet[2620]: E0213 15:42:33.285464 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.285481 kubelet[2620]: W0213 15:42:33.285475 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.285547 kubelet[2620]: E0213 15:42:33.285488 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.285744 kubelet[2620]: E0213 15:42:33.285727 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.285744 kubelet[2620]: W0213 15:42:33.285738 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.285807 kubelet[2620]: E0213 15:42:33.285750 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.286058 kubelet[2620]: E0213 15:42:33.286036 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.286106 kubelet[2620]: W0213 15:42:33.286055 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.286106 kubelet[2620]: E0213 15:42:33.286079 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.286402 kubelet[2620]: E0213 15:42:33.286384 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.286402 kubelet[2620]: W0213 15:42:33.286396 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.286402 kubelet[2620]: E0213 15:42:33.286405 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.286649 kubelet[2620]: E0213 15:42:33.286632 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.286649 kubelet[2620]: W0213 15:42:33.286642 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.286649 kubelet[2620]: E0213 15:42:33.286650 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:33.400488 kubelet[2620]: E0213 15:42:33.400450 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.400488 kubelet[2620]: W0213 15:42:33.400467 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.400488 kubelet[2620]: E0213 15:42:33.400481 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.424221 kubelet[2620]: E0213 15:42:33.424131 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:33.424221 kubelet[2620]: W0213 15:42:33.424152 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:33.424221 kubelet[2620]: E0213 15:42:33.424171 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:33.578671 containerd[1509]: time="2025-02-13T15:42:33.578584700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:33.578671 containerd[1509]: time="2025-02-13T15:42:33.578634259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:33.578671 containerd[1509]: time="2025-02-13T15:42:33.578645401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:33.578804 containerd[1509]: time="2025-02-13T15:42:33.578711232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:33.600460 systemd[1]: Started cri-containerd-a14d91014b66b9d2b1eca7916d55540701e3752c00b0702f3ed64c1ab9354d2b.scope - libcontainer container a14d91014b66b9d2b1eca7916d55540701e3752c00b0702f3ed64c1ab9354d2b. Feb 13 15:42:33.636865 containerd[1509]: time="2025-02-13T15:42:33.636815687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656966c65c-7dj7w,Uid:cbd14d2d-f543-49ee-b055-f7ff3d06a9a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a14d91014b66b9d2b1eca7916d55540701e3752c00b0702f3ed64c1ab9354d2b\"" Feb 13 15:42:33.637459 kubelet[2620]: E0213 15:42:33.637428 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:33.638571 containerd[1509]: time="2025-02-13T15:42:33.638548346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:42:34.475857 kubelet[2620]: E0213 15:42:34.475787 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:34.635303 containerd[1509]: time="2025-02-13T15:42:34.635186457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:34.635303 containerd[1509]: time="2025-02-13T15:42:34.635246176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:34.635303 containerd[1509]: time="2025-02-13T15:42:34.635258460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:34.635780 containerd[1509]: time="2025-02-13T15:42:34.635368309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:34.658481 systemd[1]: Started cri-containerd-95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e.scope - libcontainer container 95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e. Feb 13 15:42:34.685469 containerd[1509]: time="2025-02-13T15:42:34.685405818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-chxfg,Uid:aeee55cf-4201-4197-bf08-2d2bf0ed7ef2,Namespace:calico-system,Attempt:0,} returns sandbox id \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\"" Feb 13 15:42:34.686068 kubelet[2620]: E0213 15:42:34.686026 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:36.476182 kubelet[2620]: E0213 15:42:36.476110 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:37.173104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132866040.mount: Deactivated successfully. 
Feb 13 15:42:38.476063 kubelet[2620]: E0213 15:42:38.476014 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:39.410350 containerd[1509]: time="2025-02-13T15:42:39.410255879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:39.414945 containerd[1509]: time="2025-02-13T15:42:39.414735207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 15:42:39.419022 containerd[1509]: time="2025-02-13T15:42:39.418946728Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:39.422416 containerd[1509]: time="2025-02-13T15:42:39.422374236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:39.423308 containerd[1509]: time="2025-02-13T15:42:39.423241714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 5.784651035s" Feb 13 15:42:39.423375 containerd[1509]: time="2025-02-13T15:42:39.423305660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 15:42:39.424520 containerd[1509]: time="2025-02-13T15:42:39.424481815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:42:39.434191 containerd[1509]: time="2025-02-13T15:42:39.434136091Z" level=info msg="CreateContainer within sandbox \"a14d91014b66b9d2b1eca7916d55540701e3752c00b0702f3ed64c1ab9354d2b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:42:39.453374 containerd[1509]: time="2025-02-13T15:42:39.453284199Z" level=info msg="CreateContainer within sandbox \"a14d91014b66b9d2b1eca7916d55540701e3752c00b0702f3ed64c1ab9354d2b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1b7807128733b9d64dffd276ccc37da37fdc03a193274284bb10269aa4071fcc\"" Feb 13 15:42:39.453961 containerd[1509]: time="2025-02-13T15:42:39.453933577Z" level=info msg="StartContainer for \"1b7807128733b9d64dffd276ccc37da37fdc03a193274284bb10269aa4071fcc\"" Feb 13 15:42:39.482522 systemd[1]: Started cri-containerd-1b7807128733b9d64dffd276ccc37da37fdc03a193274284bb10269aa4071fcc.scope - libcontainer container 1b7807128733b9d64dffd276ccc37da37fdc03a193274284bb10269aa4071fcc. 
Feb 13 15:42:39.536394 containerd[1509]: time="2025-02-13T15:42:39.536279722Z" level=info msg="StartContainer for \"1b7807128733b9d64dffd276ccc37da37fdc03a193274284bb10269aa4071fcc\" returns successfully" Feb 13 15:42:39.543587 kubelet[2620]: E0213 15:42:39.543541 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:39.593137 kubelet[2620]: I0213 15:42:39.593084 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-656966c65c-7dj7w" podStartSLOduration=1.8069553900000002 podStartE2EDuration="7.593064775s" podCreationTimestamp="2025-02-13 15:42:32 +0000 UTC" firstStartedPulling="2025-02-13 15:42:33.638159902 +0000 UTC m=+19.273924027" lastFinishedPulling="2025-02-13 15:42:39.424269287 +0000 UTC m=+25.060033412" observedRunningTime="2025-02-13 15:42:39.592497478 +0000 UTC m=+25.228261603" watchObservedRunningTime="2025-02-13 15:42:39.593064775 +0000 UTC m=+25.228828900" Feb 13 15:42:39.604848 kubelet[2620]: E0213 15:42:39.604779 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.604848 kubelet[2620]: W0213 15:42:39.604821 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.604848 kubelet[2620]: E0213 15:42:39.604847 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.605224 kubelet[2620]: E0213 15:42:39.605191 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.605224 kubelet[2620]: W0213 15:42:39.605205 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.605224 kubelet[2620]: E0213 15:42:39.605215 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.605822 kubelet[2620]: E0213 15:42:39.605794 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.605966 kubelet[2620]: W0213 15:42:39.605947 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.606039 kubelet[2620]: E0213 15:42:39.605969 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.606348 kubelet[2620]: E0213 15:42:39.606328 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.606348 kubelet[2620]: W0213 15:42:39.606342 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.606473 kubelet[2620]: E0213 15:42:39.606352 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.607032 kubelet[2620]: E0213 15:42:39.606962 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.607032 kubelet[2620]: W0213 15:42:39.607008 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.607032 kubelet[2620]: E0213 15:42:39.607020 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.607347 kubelet[2620]: E0213 15:42:39.607301 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.607347 kubelet[2620]: W0213 15:42:39.607333 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.607347 kubelet[2620]: E0213 15:42:39.607344 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.607752 kubelet[2620]: E0213 15:42:39.607701 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.607752 kubelet[2620]: W0213 15:42:39.607734 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.607873 kubelet[2620]: E0213 15:42:39.607766 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.608115 kubelet[2620]: E0213 15:42:39.608084 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.608115 kubelet[2620]: W0213 15:42:39.608100 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.608115 kubelet[2620]: E0213 15:42:39.608111 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.608421 kubelet[2620]: E0213 15:42:39.608391 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.608421 kubelet[2620]: W0213 15:42:39.608407 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.608421 kubelet[2620]: E0213 15:42:39.608419 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.608770 kubelet[2620]: E0213 15:42:39.608752 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.608827 kubelet[2620]: W0213 15:42:39.608771 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.608827 kubelet[2620]: E0213 15:42:39.608783 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.609051 kubelet[2620]: E0213 15:42:39.609033 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.609051 kubelet[2620]: W0213 15:42:39.609049 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.609125 kubelet[2620]: E0213 15:42:39.609061 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.609366 kubelet[2620]: E0213 15:42:39.609310 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.609366 kubelet[2620]: W0213 15:42:39.609341 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.609366 kubelet[2620]: E0213 15:42:39.609353 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.609702 kubelet[2620]: E0213 15:42:39.609607 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.609702 kubelet[2620]: W0213 15:42:39.609627 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.609702 kubelet[2620]: E0213 15:42:39.609638 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.609994 kubelet[2620]: E0213 15:42:39.609974 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.610030 kubelet[2620]: W0213 15:42:39.609990 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.610061 kubelet[2620]: E0213 15:42:39.610033 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.610309 kubelet[2620]: E0213 15:42:39.610293 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.610309 kubelet[2620]: W0213 15:42:39.610306 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.610396 kubelet[2620]: E0213 15:42:39.610343 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.629155 kubelet[2620]: E0213 15:42:39.628961 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.629155 kubelet[2620]: W0213 15:42:39.628990 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.629155 kubelet[2620]: E0213 15:42:39.629015 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.629681 kubelet[2620]: E0213 15:42:39.629575 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.629681 kubelet[2620]: W0213 15:42:39.629588 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.629681 kubelet[2620]: E0213 15:42:39.629607 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.631832 kubelet[2620]: E0213 15:42:39.631764 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.631832 kubelet[2620]: W0213 15:42:39.631787 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.632157 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.632560 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637118 kubelet[2620]: W0213 15:42:39.632577 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.632595 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.632933 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637118 kubelet[2620]: W0213 15:42:39.632945 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.633028 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.633350 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637118 kubelet[2620]: W0213 15:42:39.633360 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637118 kubelet[2620]: E0213 15:42:39.633470 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.633886 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637484 kubelet[2620]: W0213 15:42:39.633897 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.634015 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.634964 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637484 kubelet[2620]: W0213 15:42:39.634975 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.635001 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.635309 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637484 kubelet[2620]: W0213 15:42:39.635336 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.635348 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.637484 kubelet[2620]: E0213 15:42:39.636059 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637902 kubelet[2620]: W0213 15:42:39.636070 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.637902 kubelet[2620]: E0213 15:42:39.636095 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.637902 kubelet[2620]: E0213 15:42:39.636763 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.637902 kubelet[2620]: W0213 15:42:39.636773 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.638899 kubelet[2620]: E0213 15:42:39.638124 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.638899 kubelet[2620]: E0213 15:42:39.638728 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.638899 kubelet[2620]: W0213 15:42:39.638739 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.638899 kubelet[2620]: E0213 15:42:39.638872 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.639836 kubelet[2620]: E0213 15:42:39.639821 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.640058 kubelet[2620]: W0213 15:42:39.639918 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.640177 kubelet[2620]: E0213 15:42:39.640161 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.640479 kubelet[2620]: E0213 15:42:39.640466 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.640651 kubelet[2620]: W0213 15:42:39.640634 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.641437 kubelet[2620]: E0213 15:42:39.640833 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.641691 kubelet[2620]: E0213 15:42:39.641679 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.641760 kubelet[2620]: W0213 15:42:39.641748 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.641848 kubelet[2620]: E0213 15:42:39.641834 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.642140 kubelet[2620]: E0213 15:42:39.642127 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.642217 kubelet[2620]: W0213 15:42:39.642198 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.642335 kubelet[2620]: E0213 15:42:39.642272 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:39.642994 kubelet[2620]: E0213 15:42:39.642614 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.642994 kubelet[2620]: W0213 15:42:39.642628 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.642994 kubelet[2620]: E0213 15:42:39.642640 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:39.643501 kubelet[2620]: E0213 15:42:39.643488 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:39.643580 kubelet[2620]: W0213 15:42:39.643567 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:39.643639 kubelet[2620]: E0213 15:42:39.643627 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.475599 kubelet[2620]: E0213 15:42:40.475525 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:40.547057 kubelet[2620]: I0213 15:42:40.546950 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:42:40.547575 kubelet[2620]: E0213 15:42:40.547499 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:40.616023 kubelet[2620]: E0213 15:42:40.615994 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.616023 kubelet[2620]: W0213 15:42:40.616014 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.616023 kubelet[2620]: E0213 15:42:40.616034 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.616369 kubelet[2620]: E0213 15:42:40.616335 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.616369 kubelet[2620]: W0213 15:42:40.616349 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.616369 kubelet[2620]: E0213 15:42:40.616358 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.616647 kubelet[2620]: E0213 15:42:40.616624 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.616647 kubelet[2620]: W0213 15:42:40.616635 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.616647 kubelet[2620]: E0213 15:42:40.616643 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.616868 kubelet[2620]: E0213 15:42:40.616854 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.616868 kubelet[2620]: W0213 15:42:40.616864 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.616936 kubelet[2620]: E0213 15:42:40.616880 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.617089 kubelet[2620]: E0213 15:42:40.617070 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.617089 kubelet[2620]: W0213 15:42:40.617080 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.617136 kubelet[2620]: E0213 15:42:40.617096 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.617295 kubelet[2620]: E0213 15:42:40.617281 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.617295 kubelet[2620]: W0213 15:42:40.617291 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.617380 kubelet[2620]: E0213 15:42:40.617300 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.617511 kubelet[2620]: E0213 15:42:40.617497 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.617511 kubelet[2620]: W0213 15:42:40.617506 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.617563 kubelet[2620]: E0213 15:42:40.617514 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.617712 kubelet[2620]: E0213 15:42:40.617700 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.617712 kubelet[2620]: W0213 15:42:40.617709 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.617762 kubelet[2620]: E0213 15:42:40.617717 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.617921 kubelet[2620]: E0213 15:42:40.617908 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.617921 kubelet[2620]: W0213 15:42:40.617918 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.617968 kubelet[2620]: E0213 15:42:40.617926 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.618106 kubelet[2620]: E0213 15:42:40.618093 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.618106 kubelet[2620]: W0213 15:42:40.618102 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.618163 kubelet[2620]: E0213 15:42:40.618109 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.618295 kubelet[2620]: E0213 15:42:40.618282 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.618295 kubelet[2620]: W0213 15:42:40.618293 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.618361 kubelet[2620]: E0213 15:42:40.618303 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.618545 kubelet[2620]: E0213 15:42:40.618529 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.618545 kubelet[2620]: W0213 15:42:40.618542 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.618595 kubelet[2620]: E0213 15:42:40.618551 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.618795 kubelet[2620]: E0213 15:42:40.618781 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.618795 kubelet[2620]: W0213 15:42:40.618792 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.618854 kubelet[2620]: E0213 15:42:40.618801 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.619018 kubelet[2620]: E0213 15:42:40.619003 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.619018 kubelet[2620]: W0213 15:42:40.619014 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.619091 kubelet[2620]: E0213 15:42:40.619023 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.619263 kubelet[2620]: E0213 15:42:40.619250 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.619263 kubelet[2620]: W0213 15:42:40.619261 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.619310 kubelet[2620]: E0213 15:42:40.619269 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.638801 kubelet[2620]: E0213 15:42:40.638762 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.638801 kubelet[2620]: W0213 15:42:40.638790 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.638912 kubelet[2620]: E0213 15:42:40.638822 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.639106 kubelet[2620]: E0213 15:42:40.639088 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.639106 kubelet[2620]: W0213 15:42:40.639099 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.639164 kubelet[2620]: E0213 15:42:40.639115 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.639367 kubelet[2620]: E0213 15:42:40.639345 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.639367 kubelet[2620]: W0213 15:42:40.639364 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.639431 kubelet[2620]: E0213 15:42:40.639380 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.639563 kubelet[2620]: E0213 15:42:40.639550 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.639563 kubelet[2620]: W0213 15:42:40.639559 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.639617 kubelet[2620]: E0213 15:42:40.639572 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.639831 kubelet[2620]: E0213 15:42:40.639798 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.639831 kubelet[2620]: W0213 15:42:40.639824 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.640008 kubelet[2620]: E0213 15:42:40.639857 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.640208 kubelet[2620]: E0213 15:42:40.640181 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.640208 kubelet[2620]: W0213 15:42:40.640199 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.640268 kubelet[2620]: E0213 15:42:40.640217 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.640562 kubelet[2620]: E0213 15:42:40.640546 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.640562 kubelet[2620]: W0213 15:42:40.640561 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.640640 kubelet[2620]: E0213 15:42:40.640577 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.640845 kubelet[2620]: E0213 15:42:40.640827 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.640845 kubelet[2620]: W0213 15:42:40.640844 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.640922 kubelet[2620]: E0213 15:42:40.640864 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.641111 kubelet[2620]: E0213 15:42:40.641093 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.641111 kubelet[2620]: W0213 15:42:40.641108 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.641201 kubelet[2620]: E0213 15:42:40.641127 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.641463 kubelet[2620]: E0213 15:42:40.641445 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.641463 kubelet[2620]: W0213 15:42:40.641460 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.641512 kubelet[2620]: E0213 15:42:40.641478 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.641688 kubelet[2620]: E0213 15:42:40.641675 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.641716 kubelet[2620]: W0213 15:42:40.641686 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.641716 kubelet[2620]: E0213 15:42:40.641702 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.641953 kubelet[2620]: E0213 15:42:40.641942 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.641990 kubelet[2620]: W0213 15:42:40.641953 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.641990 kubelet[2620]: E0213 15:42:40.641982 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.642186 kubelet[2620]: E0213 15:42:40.642170 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.642186 kubelet[2620]: W0213 15:42:40.642183 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.642258 kubelet[2620]: E0213 15:42:40.642194 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.642479 kubelet[2620]: E0213 15:42:40.642465 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.642621 kubelet[2620]: W0213 15:42:40.642478 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.642621 kubelet[2620]: E0213 15:42:40.642494 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.642724 kubelet[2620]: E0213 15:42:40.642712 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.642752 kubelet[2620]: W0213 15:42:40.642724 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.642752 kubelet[2620]: E0213 15:42:40.642740 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.642995 kubelet[2620]: E0213 15:42:40.642972 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.642995 kubelet[2620]: W0213 15:42:40.642986 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.643043 kubelet[2620]: E0213 15:42:40.643002 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:40.643267 kubelet[2620]: E0213 15:42:40.643247 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.643267 kubelet[2620]: W0213 15:42:40.643260 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.643354 kubelet[2620]: E0213 15:42:40.643270 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:42:40.643629 kubelet[2620]: E0213 15:42:40.643614 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:42:40.643629 kubelet[2620]: W0213 15:42:40.643625 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:42:40.643683 kubelet[2620]: E0213 15:42:40.643633 2620 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:42:42.475119 kubelet[2620]: E0213 15:42:42.475072 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:42.508010 containerd[1509]: time="2025-02-13T15:42:42.507933888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:42.509059 containerd[1509]: time="2025-02-13T15:42:42.508979348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 15:42:42.510225 containerd[1509]: time="2025-02-13T15:42:42.510183770Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:42.515855 containerd[1509]: time="2025-02-13T15:42:42.515818928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:42.516576 containerd[1509]: time="2025-02-13T15:42:42.516532757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 3.092014361s" Feb 13 15:42:42.516653 containerd[1509]: time="2025-02-13T15:42:42.516580091Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:42:42.519170 containerd[1509]: time="2025-02-13T15:42:42.519126553Z" level=info msg="CreateContainer within sandbox \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:42:42.538754 containerd[1509]: time="2025-02-13T15:42:42.538709599Z" level=info msg="CreateContainer within sandbox \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914\"" Feb 13 15:42:42.539372 containerd[1509]: time="2025-02-13T15:42:42.539346959Z" level=info msg="StartContainer for \"f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914\"" Feb 13 15:42:42.576450 systemd[1]: Started cri-containerd-f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914.scope - libcontainer container f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914. Feb 13 15:42:43.127661 systemd[1]: cri-containerd-f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914.scope: Deactivated successfully. Feb 13 15:42:43.127983 systemd[1]: cri-containerd-f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914.scope: Consumed 35ms CPU time, 8.1M memory peak, 6.3M written to disk. 
Feb 13 15:42:43.305490 containerd[1509]: time="2025-02-13T15:42:43.305433206Z" level=info msg="StartContainer for \"f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914\" returns successfully" Feb 13 15:42:43.406819 containerd[1509]: time="2025-02-13T15:42:43.406740337Z" level=info msg="shim disconnected" id=f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914 namespace=k8s.io Feb 13 15:42:43.406819 containerd[1509]: time="2025-02-13T15:42:43.406813310Z" level=warning msg="cleaning up after shim disconnected" id=f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914 namespace=k8s.io Feb 13 15:42:43.406819 containerd[1509]: time="2025-02-13T15:42:43.406823700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:42:43.532103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4ac5e2dacaf484f70299c9f9c2f474410b2376cfb7e0b512bfb2d7a6135d914-rootfs.mount: Deactivated successfully. Feb 13 15:42:43.554351 kubelet[2620]: E0213 15:42:43.553498 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:43.554962 containerd[1509]: time="2025-02-13T15:42:43.554242753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:42:44.475857 kubelet[2620]: E0213 15:42:44.475776 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:45.233794 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:49792.service - OpenSSH per-connection server daemon (10.0.0.1:49792). 
Feb 13 15:42:45.272548 sshd[3386]: Accepted publickey for core from 10.0.0.1 port 49792 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:42:45.274070 sshd-session[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:45.279149 systemd-logind[1495]: New session 8 of user core. Feb 13 15:42:45.284482 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:42:45.404820 sshd[3388]: Connection closed by 10.0.0.1 port 49792 Feb 13 15:42:45.405162 sshd-session[3386]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:45.408923 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:49792.service: Deactivated successfully. Feb 13 15:42:45.410944 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:42:45.411735 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:42:45.412708 systemd-logind[1495]: Removed session 8. Feb 13 15:42:46.475775 kubelet[2620]: E0213 15:42:46.475697 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:48.483540 kubelet[2620]: E0213 15:42:48.483487 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:50.422117 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:40160.service - OpenSSH per-connection server daemon (10.0.0.1:40160). 
Feb 13 15:42:50.504783 kubelet[2620]: E0213 15:42:50.504714 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:50.527392 sshd[3403]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:42:50.528871 sshd-session[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:50.533481 systemd-logind[1495]: New session 9 of user core. Feb 13 15:42:50.538458 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:42:50.672768 sshd[3407]: Connection closed by 10.0.0.1 port 40160 Feb 13 15:42:50.673062 sshd-session[3403]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:50.677432 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:40160.service: Deactivated successfully. Feb 13 15:42:50.679505 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:42:50.680259 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:42:50.681248 systemd-logind[1495]: Removed session 9. 
Feb 13 15:42:51.855548 containerd[1509]: time="2025-02-13T15:42:51.855486483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:51.856231 containerd[1509]: time="2025-02-13T15:42:51.856164992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:42:51.857401 containerd[1509]: time="2025-02-13T15:42:51.857368531Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:51.859678 containerd[1509]: time="2025-02-13T15:42:51.859640899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:51.860372 containerd[1509]: time="2025-02-13T15:42:51.860343144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 8.306066184s" Feb 13 15:42:51.860372 containerd[1509]: time="2025-02-13T15:42:51.860372932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:42:51.862514 containerd[1509]: time="2025-02-13T15:42:51.862483355Z" level=info msg="CreateContainer within sandbox \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:42:51.882982 containerd[1509]: time="2025-02-13T15:42:51.882948281Z" level=info msg="CreateContainer 
within sandbox \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d\"" Feb 13 15:42:51.883450 containerd[1509]: time="2025-02-13T15:42:51.883406962Z" level=info msg="StartContainer for \"c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d\"" Feb 13 15:42:51.924500 systemd[1]: Started cri-containerd-c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d.scope - libcontainer container c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d. Feb 13 15:42:51.958767 containerd[1509]: time="2025-02-13T15:42:51.958724547Z" level=info msg="StartContainer for \"c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d\" returns successfully" Feb 13 15:42:52.536339 kubelet[2620]: E0213 15:42:52.536258 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:52.567840 kubelet[2620]: E0213 15:42:52.567804 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:52.733616 kubelet[2620]: I0213 15:42:52.733569 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:42:52.733984 kubelet[2620]: E0213 15:42:52.733959 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:53.570008 kubelet[2620]: E0213 15:42:53.569944 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:53.570801 kubelet[2620]: E0213 15:42:53.570763 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:54.152567 containerd[1509]: time="2025-02-13T15:42:54.152252230Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:42:54.156211 systemd[1]: cri-containerd-c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d.scope: Deactivated successfully. Feb 13 15:42:54.156588 systemd[1]: cri-containerd-c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d.scope: Consumed 572ms CPU time, 159M memory peak, 40K read from disk, 151M written to disk. Feb 13 15:42:54.178249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d-rootfs.mount: Deactivated successfully. 
Feb 13 15:42:54.196459 containerd[1509]: time="2025-02-13T15:42:54.196375005Z" level=info msg="shim disconnected" id=c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d namespace=k8s.io Feb 13 15:42:54.196459 containerd[1509]: time="2025-02-13T15:42:54.196441954Z" level=warning msg="cleaning up after shim disconnected" id=c57edb71e75d50ad0d437b5fce68718b519eaf3d2824abb533cb037fa8b99b8d namespace=k8s.io Feb 13 15:42:54.196459 containerd[1509]: time="2025-02-13T15:42:54.196452876Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:42:54.201175 kubelet[2620]: I0213 15:42:54.200456 2620 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 15:42:54.245251 systemd[1]: Created slice kubepods-burstable-pod5fc4b308_ecdc_4011_b8c8_b82b8a22d611.slice - libcontainer container kubepods-burstable-pod5fc4b308_ecdc_4011_b8c8_b82b8a22d611.slice. Feb 13 15:42:54.252993 systemd[1]: Created slice kubepods-besteffort-pod75e9cbc2_59c3_4f8f_a732_f7aed42e478e.slice - libcontainer container kubepods-besteffort-pod75e9cbc2_59c3_4f8f_a732_f7aed42e478e.slice. Feb 13 15:42:54.258470 systemd[1]: Created slice kubepods-burstable-podba5086be_6515_43a3_aac4_e336b6f0df11.slice - libcontainer container kubepods-burstable-podba5086be_6515_43a3_aac4_e336b6f0df11.slice. Feb 13 15:42:54.263937 systemd[1]: Created slice kubepods-besteffort-pod34518299_f628_42e5_806b_51d2fa3ce346.slice - libcontainer container kubepods-besteffort-pod34518299_f628_42e5_806b_51d2fa3ce346.slice. Feb 13 15:42:54.270041 systemd[1]: Created slice kubepods-besteffort-pod41ec69cc_3a06_44c4_8295_0752449d5e76.slice - libcontainer container kubepods-besteffort-pod41ec69cc_3a06_44c4_8295_0752449d5e76.slice. 
Feb 13 15:42:54.337840 kubelet[2620]: I0213 15:42:54.337802 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mdzl\" (UniqueName: \"kubernetes.io/projected/34518299-f628-42e5-806b-51d2fa3ce346-kube-api-access-7mdzl\") pod \"calico-apiserver-57b5c7b97f-ldpr5\" (UID: \"34518299-f628-42e5-806b-51d2fa3ce346\") " pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:54.337840 kubelet[2620]: I0213 15:42:54.337846 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5skw\" (UniqueName: \"kubernetes.io/projected/75e9cbc2-59c3-4f8f-a732-f7aed42e478e-kube-api-access-l5skw\") pod \"calico-kube-controllers-677779d7f9-hvmnx\" (UID: \"75e9cbc2-59c3-4f8f-a732-f7aed42e478e\") " pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:54.338022 kubelet[2620]: I0213 15:42:54.337930 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m886q\" (UniqueName: \"kubernetes.io/projected/5fc4b308-ecdc-4011-b8c8-b82b8a22d611-kube-api-access-m886q\") pod \"coredns-668d6bf9bc-7lp99\" (UID: \"5fc4b308-ecdc-4011-b8c8-b82b8a22d611\") " pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:54.338022 kubelet[2620]: I0213 15:42:54.337994 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75e9cbc2-59c3-4f8f-a732-f7aed42e478e-tigera-ca-bundle\") pod \"calico-kube-controllers-677779d7f9-hvmnx\" (UID: \"75e9cbc2-59c3-4f8f-a732-f7aed42e478e\") " pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:54.338022 kubelet[2620]: I0213 15:42:54.338015 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5fc4b308-ecdc-4011-b8c8-b82b8a22d611-config-volume\") pod \"coredns-668d6bf9bc-7lp99\" (UID: \"5fc4b308-ecdc-4011-b8c8-b82b8a22d611\") " pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:54.338101 kubelet[2620]: I0213 15:42:54.338035 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmn2m\" (UniqueName: \"kubernetes.io/projected/ba5086be-6515-43a3-aac4-e336b6f0df11-kube-api-access-mmn2m\") pod \"coredns-668d6bf9bc-qh42t\" (UID: \"ba5086be-6515-43a3-aac4-e336b6f0df11\") " pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:42:54.338127 kubelet[2620]: I0213 15:42:54.338102 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34518299-f628-42e5-806b-51d2fa3ce346-calico-apiserver-certs\") pod \"calico-apiserver-57b5c7b97f-ldpr5\" (UID: \"34518299-f628-42e5-806b-51d2fa3ce346\") " pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:54.338127 kubelet[2620]: I0213 15:42:54.338120 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t6m\" (UniqueName: \"kubernetes.io/projected/41ec69cc-3a06-44c4-8295-0752449d5e76-kube-api-access-56t6m\") pod \"calico-apiserver-57b5c7b97f-nvw2j\" (UID: \"41ec69cc-3a06-44c4-8295-0752449d5e76\") " pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:54.338180 kubelet[2620]: I0213 15:42:54.338167 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba5086be-6515-43a3-aac4-e336b6f0df11-config-volume\") pod \"coredns-668d6bf9bc-qh42t\" (UID: \"ba5086be-6515-43a3-aac4-e336b6f0df11\") " pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:42:54.338209 kubelet[2620]: I0213 15:42:54.338194 2620 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41ec69cc-3a06-44c4-8295-0752449d5e76-calico-apiserver-certs\") pod \"calico-apiserver-57b5c7b97f-nvw2j\" (UID: \"41ec69cc-3a06-44c4-8295-0752449d5e76\") " pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:54.481869 systemd[1]: Created slice kubepods-besteffort-podfb72e478_27f9_4b96_ac63_312fc0de0c3b.slice - libcontainer container kubepods-besteffort-podfb72e478_27f9_4b96_ac63_312fc0de0c3b.slice. Feb 13 15:42:54.484867 containerd[1509]: time="2025-02-13T15:42:54.484824256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:0,}" Feb 13 15:42:54.550041 kubelet[2620]: E0213 15:42:54.549989 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:54.550647 containerd[1509]: time="2025-02-13T15:42:54.550472127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:54.558859 containerd[1509]: time="2025-02-13T15:42:54.558635588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:0,}" Feb 13 15:42:54.562757 kubelet[2620]: E0213 15:42:54.562734 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:54.563154 containerd[1509]: time="2025-02-13T15:42:54.563122103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:0,}" 
Feb 13 15:42:54.566739 containerd[1509]: time="2025-02-13T15:42:54.566706618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:42:54.568869 containerd[1509]: time="2025-02-13T15:42:54.568814027Z" level=error msg="Failed to destroy network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.569240 containerd[1509]: time="2025-02-13T15:42:54.569208172Z" level=error msg="encountered an error cleaning up failed sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.569307 containerd[1509]: time="2025-02-13T15:42:54.569285883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.569625 kubelet[2620]: E0213 15:42:54.569545 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.569625 kubelet[2620]: E0213 15:42:54.569632 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:54.569818 kubelet[2620]: E0213 15:42:54.569654 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:54.569818 kubelet[2620]: E0213 15:42:54.569696 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:54.573243 kubelet[2620]: E0213 15:42:54.573086 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:54.574036 containerd[1509]: time="2025-02-13T15:42:54.574004198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:42:54.574104 containerd[1509]: time="2025-02-13T15:42:54.574017845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:42:54.574206 kubelet[2620]: I0213 15:42:54.574196 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49" Feb 13 15:42:54.574742 containerd[1509]: time="2025-02-13T15:42:54.574705138Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:42:54.574917 containerd[1509]: time="2025-02-13T15:42:54.574898163Z" level=info msg="Ensure that sandbox 3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49 in task-service has been cleanup successfully" Feb 13 15:42:54.575089 containerd[1509]: time="2025-02-13T15:42:54.575071259Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:42:54.575089 containerd[1509]: time="2025-02-13T15:42:54.575086358Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:42:54.575449 containerd[1509]: time="2025-02-13T15:42:54.575428893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:1,}" Feb 13 15:42:54.957030 containerd[1509]: time="2025-02-13T15:42:54.956959427Z" level=error msg="Failed to destroy network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.957666 containerd[1509]: time="2025-02-13T15:42:54.957469266Z" level=error msg="encountered an error cleaning up failed sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.957666 containerd[1509]: time="2025-02-13T15:42:54.957541236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.957841 kubelet[2620]: E0213 15:42:54.957792 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.957894 kubelet[2620]: E0213 15:42:54.957877 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:54.957918 kubelet[2620]: E0213 15:42:54.957903 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:54.957986 kubelet[2620]: E0213 15:42:54.957954 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podUID="41ec69cc-3a06-44c4-8295-0752449d5e76" Feb 13 15:42:54.960306 containerd[1509]: time="2025-02-13T15:42:54.960247386Z" level=error msg="Failed to destroy network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.960721 containerd[1509]: time="2025-02-13T15:42:54.960691719Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.960788 containerd[1509]: time="2025-02-13T15:42:54.960758439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.961115 containerd[1509]: time="2025-02-13T15:42:54.960985539Z" level=error msg="Failed to destroy network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.961344 kubelet[2620]: E0213 15:42:54.961279 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.961393 kubelet[2620]: E0213 15:42:54.961364 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:54.961421 kubelet[2620]: E0213 15:42:54.961392 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:54.961459 kubelet[2620]: E0213 15:42:54.961429 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e" Feb 13 15:42:54.961678 containerd[1509]: time="2025-02-13T15:42:54.961634148Z" level=error msg="encountered an error cleaning up failed sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:42:54.961939 containerd[1509]: time="2025-02-13T15:42:54.961799388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.962232 kubelet[2620]: E0213 15:42:54.962214 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.962274 kubelet[2620]: E0213 15:42:54.962243 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:54.962274 kubelet[2620]: E0213 15:42:54.962257 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:54.962358 kubelet[2620]: E0213 15:42:54.962283 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346" Feb 13 15:42:54.963751 containerd[1509]: time="2025-02-13T15:42:54.963301543Z" level=error msg="Failed to destroy network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.963751 containerd[1509]: time="2025-02-13T15:42:54.963657314Z" level=error msg="encountered an error cleaning up failed sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.963751 containerd[1509]: time="2025-02-13T15:42:54.963702642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:0,} failed, error" error="failed 
to setup network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.963866 kubelet[2620]: E0213 15:42:54.963810 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.963866 kubelet[2620]: E0213 15:42:54.963839 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:54.963866 kubelet[2620]: E0213 15:42:54.963854 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:54.963950 kubelet[2620]: E0213 15:42:54.963884 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611" Feb 13 15:42:54.964534 containerd[1509]: time="2025-02-13T15:42:54.964228091Z" level=error msg="Failed to destroy network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.964534 containerd[1509]: time="2025-02-13T15:42:54.964423510Z" level=error msg="Failed to destroy network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.964758 containerd[1509]: time="2025-02-13T15:42:54.964730335Z" level=error msg="encountered an error cleaning up failed sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.964823 containerd[1509]: time="2025-02-13T15:42:54.964799880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.964935 containerd[1509]: time="2025-02-13T15:42:54.964799339Z" level=error msg="encountered an error cleaning up failed sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.964935 containerd[1509]: time="2025-02-13T15:42:54.964929672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.965168 kubelet[2620]: E0213 15:42:54.964964 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.965168 kubelet[2620]: E0213 15:42:54.964989 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:42:54.965168 kubelet[2620]: E0213 15:42:54.965002 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:42:54.965266 kubelet[2620]: E0213 15:42:54.965037 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh42t" podUID="ba5086be-6515-43a3-aac4-e336b6f0df11" Feb 13 15:42:54.965266 kubelet[2620]: E0213 15:42:54.965136 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:54.965266 kubelet[2620]: E0213 15:42:54.965167 2620 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:54.965413 kubelet[2620]: E0213 15:42:54.965186 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:54.965413 kubelet[2620]: E0213 15:42:54.965234 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:55.576473 kubelet[2620]: I0213 15:42:55.576435 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878" Feb 13 15:42:55.576926 containerd[1509]: 
time="2025-02-13T15:42:55.576901564Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.577089218Z" level=info msg="Ensure that sandbox a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878 in task-service has been cleanup successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.577277353Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.577288926Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.577668352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.578642370Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.579077935Z" level=info msg="Ensure that sandbox 6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43 in task-service has been cleanup successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.579290758Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.579444256Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.579587633Z" level=info msg="StopPodSandbox for 
\"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.579475486Z" level=info msg="Ensure that sandbox 950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b in task-service has been cleanup successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.580204921Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.580296528Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:42:55.580423 containerd[1509]: time="2025-02-13T15:42:55.580366484Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:42:55.580767 kubelet[2620]: I0213 15:42:55.578131 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43" Feb 13 15:42:55.580767 kubelet[2620]: I0213 15:42:55.578961 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b" Feb 13 15:42:55.580834 containerd[1509]: time="2025-02-13T15:42:55.580755708Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:42:55.580834 containerd[1509]: time="2025-02-13T15:42:55.580768063Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:42:55.580933 containerd[1509]: time="2025-02-13T15:42:55.580884078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:2,}" Feb 
13 15:42:55.582135 kubelet[2620]: I0213 15:42:55.582070 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8" Feb 13 15:42:55.582634 containerd[1509]: time="2025-02-13T15:42:55.582493448Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:42:55.582673 containerd[1509]: time="2025-02-13T15:42:55.582662015Z" level=info msg="Ensure that sandbox 63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8 in task-service has been cleanup successfully" Feb 13 15:42:55.582896 containerd[1509]: time="2025-02-13T15:42:55.582872354Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:42:55.583005 containerd[1509]: time="2025-02-13T15:42:55.582906560Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:42:55.583005 containerd[1509]: time="2025-02-13T15:42:55.582949714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:42:55.583250 kubelet[2620]: E0213 15:42:55.583225 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:55.583257 systemd[1]: run-netns-cni\x2de2c4809b\x2d92b6\x2d74dc\x2d264a\x2d8cd79cbca075.mount: Deactivated successfully. 
Feb 13 15:42:55.583824 containerd[1509]: time="2025-02-13T15:42:55.583662816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:1,}" Feb 13 15:42:55.584050 kubelet[2620]: I0213 15:42:55.584010 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa" Feb 13 15:42:55.584308 systemd[1]: run-netns-cni\x2d37a40c96\x2dc538\x2dbdf4\x2dd8fa\x2dc813c2f2a8e8.mount: Deactivated successfully. Feb 13 15:42:55.584559 containerd[1509]: time="2025-02-13T15:42:55.584391038Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:42:55.584559 containerd[1509]: time="2025-02-13T15:42:55.584532994Z" level=info msg="Ensure that sandbox a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa in task-service has been cleanup successfully" Feb 13 15:42:55.584907 containerd[1509]: time="2025-02-13T15:42:55.584734645Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:42:55.584907 containerd[1509]: time="2025-02-13T15:42:55.584748111Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:42:55.585472 kubelet[2620]: I0213 15:42:55.585451 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964" Feb 13 15:42:55.585751 containerd[1509]: time="2025-02-13T15:42:55.585728403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:1,}" Feb 13 15:42:55.585957 containerd[1509]: time="2025-02-13T15:42:55.585941154Z" level=info msg="StopPodSandbox for 
\"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:42:55.586117 containerd[1509]: time="2025-02-13T15:42:55.586095544Z" level=info msg="Ensure that sandbox b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964 in task-service has been cleanup successfully" Feb 13 15:42:55.587238 containerd[1509]: time="2025-02-13T15:42:55.586303388Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:42:55.587238 containerd[1509]: time="2025-02-13T15:42:55.586592768Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:42:55.587238 containerd[1509]: time="2025-02-13T15:42:55.587052540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:1,}" Feb 13 15:42:55.587371 kubelet[2620]: E0213 15:42:55.586737 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:55.588719 systemd[1]: run-netns-cni\x2d70276cfe\x2deca4\x2da5f0\x2d4c1b\x2da521cf08045e.mount: Deactivated successfully. Feb 13 15:42:55.588830 systemd[1]: run-netns-cni\x2d657bac96\x2d4931\x2d844c\x2dc2a1\x2d3afd3e72cd3a.mount: Deactivated successfully. Feb 13 15:42:55.588916 systemd[1]: run-netns-cni\x2d6ed19030\x2da668\x2de24d\x2d7280\x2d1300076b0e07.mount: Deactivated successfully. Feb 13 15:42:55.589002 systemd[1]: run-netns-cni\x2d902ea429\x2dc52c\x2da7d1\x2d46f4\x2dbe0e17ae44e7.mount: Deactivated successfully. Feb 13 15:42:55.698670 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:40172.service - OpenSSH per-connection server daemon (10.0.0.1:40172). 
Feb 13 15:42:55.744014 containerd[1509]: time="2025-02-13T15:42:55.743941624Z" level=error msg="Failed to destroy network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.744405 containerd[1509]: time="2025-02-13T15:42:55.744379012Z" level=error msg="encountered an error cleaning up failed sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.744503 containerd[1509]: time="2025-02-13T15:42:55.744442445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.745060 kubelet[2620]: E0213 15:42:55.744714 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.745060 kubelet[2620]: E0213 15:42:55.744785 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x"
Feb 13 15:42:55.745060 kubelet[2620]: E0213 15:42:55.744814 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x"
Feb 13 15:42:55.745167 kubelet[2620]: E0213 15:42:55.744858 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b"
Feb 13 15:42:55.749392 sshd[3864]: Accepted publickey for core from 10.0.0.1 port 40172 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:42:55.750547 containerd[1509]: time="2025-02-13T15:42:55.750100218Z" level=error msg="Failed to destroy network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.750547 containerd[1509]: time="2025-02-13T15:42:55.750525222Z" level=error msg="encountered an error cleaning up failed sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.750630 containerd[1509]: time="2025-02-13T15:42:55.750586511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.750832 kubelet[2620]: E0213 15:42:55.750795 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.750912 kubelet[2620]: E0213 15:42:55.750853 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j"
Feb 13 15:42:55.750912 kubelet[2620]: E0213 15:42:55.750875 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j"
Feb 13 15:42:55.751070 kubelet[2620]: E0213 15:42:55.750915 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podUID="41ec69cc-3a06-44c4-8295-0752449d5e76"
Feb 13 15:42:55.751261 sshd-session[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:42:55.753136 containerd[1509]: time="2025-02-13T15:42:55.752901530Z" level=error msg="Failed to destroy network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.753506 containerd[1509]: time="2025-02-13T15:42:55.753471185Z" level=error msg="encountered an error cleaning up failed sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.753795 containerd[1509]: time="2025-02-13T15:42:55.753541160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.753938 kubelet[2620]: E0213 15:42:55.753817 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.753938 kubelet[2620]: E0213 15:42:55.753892 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5"
Feb 13 15:42:55.753938 kubelet[2620]: E0213 15:42:55.753916 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5"
Feb 13 15:42:55.754132 kubelet[2620]: E0213 15:42:55.753956 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346"
Feb 13 15:42:55.758810 systemd-logind[1495]: New session 10 of user core.
Feb 13 15:42:55.764489 containerd[1509]: time="2025-02-13T15:42:55.764428765Z" level=error msg="Failed to destroy network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.765127 containerd[1509]: time="2025-02-13T15:42:55.765074827Z" level=error msg="encountered an error cleaning up failed sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.765238 containerd[1509]: time="2025-02-13T15:42:55.765219759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.765640 kubelet[2620]: E0213 15:42:55.765603 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.765795 kubelet[2620]: E0213 15:42:55.765749 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t"
Feb 13 15:42:55.765795 kubelet[2620]: E0213 15:42:55.765786 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t"
Feb 13 15:42:55.765969 kubelet[2620]: E0213 15:42:55.765835 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh42t" podUID="ba5086be-6515-43a3-aac4-e336b6f0df11"
Feb 13 15:42:55.768572 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:42:55.778545 containerd[1509]: time="2025-02-13T15:42:55.778494262Z" level=error msg="Failed to destroy network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.778967 containerd[1509]: time="2025-02-13T15:42:55.778930338Z" level=error msg="encountered an error cleaning up failed sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.779029 containerd[1509]: time="2025-02-13T15:42:55.779003350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.779297 kubelet[2620]: E0213 15:42:55.779257 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.779801 kubelet[2620]: E0213 15:42:55.779449 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx"
Feb 13 15:42:55.779801 kubelet[2620]: E0213 15:42:55.779482 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx"
Feb 13 15:42:55.779801 kubelet[2620]: E0213 15:42:55.779533 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e"
Feb 13 15:42:55.780599 containerd[1509]: time="2025-02-13T15:42:55.780386532Z" level=error msg="Failed to destroy network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.781683 containerd[1509]: time="2025-02-13T15:42:55.781624121Z" level=error msg="encountered an error cleaning up failed sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.781753 containerd[1509]: time="2025-02-13T15:42:55.781716891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.781959 kubelet[2620]: E0213 15:42:55.781918 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:55.782025 kubelet[2620]: E0213 15:42:55.781964 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99"
Feb 13 15:42:55.782025 kubelet[2620]: E0213 15:42:55.781983 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99"
Feb 13 15:42:55.782102 kubelet[2620]: E0213 15:42:55.782018 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611"
Feb 13 15:42:55.886113 sshd[3977]: Connection closed by 10.0.0.1 port 40172
Feb 13 15:42:55.886492 sshd-session[3864]: pam_unix(sshd:session): session closed for user core
Feb 13 15:42:55.889810 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:40172.service: Deactivated successfully.
Feb 13 15:42:55.892093 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:42:55.893989 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:42:55.895045 systemd-logind[1495]: Removed session 10.
Feb 13 15:42:56.180487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a-shm.mount: Deactivated successfully.
Feb 13 15:42:56.589075 kubelet[2620]: I0213 15:42:56.588943 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4"
Feb 13 15:42:56.589559 containerd[1509]: time="2025-02-13T15:42:56.589433194Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\""
Feb 13 15:42:56.590248 containerd[1509]: time="2025-02-13T15:42:56.589710902Z" level=info msg="Ensure that sandbox feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4 in task-service has been cleanup successfully"
Feb 13 15:42:56.592507 containerd[1509]: time="2025-02-13T15:42:56.592468377Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully"
Feb 13 15:42:56.592752 containerd[1509]: time="2025-02-13T15:42:56.592597037Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully"
Feb 13 15:42:56.592899 systemd[1]: run-netns-cni\x2d5f5e50f0\x2da57a\x2dbcda\x2d926e\x2d0528fc9f57f7.mount: Deactivated successfully.
Feb 13 15:42:56.593472 containerd[1509]: time="2025-02-13T15:42:56.593309577Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\""
Feb 13 15:42:56.593472 containerd[1509]: time="2025-02-13T15:42:56.593430723Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully"
Feb 13 15:42:56.593472 containerd[1509]: time="2025-02-13T15:42:56.593440771Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully"
Feb 13 15:42:56.593656 kubelet[2620]: E0213 15:42:56.593633 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:42:56.593805 kubelet[2620]: I0213 15:42:56.593639 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a"
Feb 13 15:42:56.594426 containerd[1509]: time="2025-02-13T15:42:56.594400873Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\""
Feb 13 15:42:56.594809 containerd[1509]: time="2025-02-13T15:42:56.594464766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:2,}"
Feb 13 15:42:56.594809 containerd[1509]: time="2025-02-13T15:42:56.594676517Z" level=info msg="Ensure that sandbox bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a in task-service has been cleanup successfully"
Feb 13 15:42:56.594971 containerd[1509]: time="2025-02-13T15:42:56.594944166Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully"
Feb 13 15:42:56.594971 containerd[1509]: time="2025-02-13T15:42:56.594961560Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully"
Feb 13 15:42:56.595650 containerd[1509]: time="2025-02-13T15:42:56.595630806Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\""
Feb 13 15:42:56.595722 containerd[1509]: time="2025-02-13T15:42:56.595704730Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully"
Feb 13 15:42:56.595722 containerd[1509]: time="2025-02-13T15:42:56.595716924Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully"
Feb 13 15:42:56.596009 containerd[1509]: time="2025-02-13T15:42:56.595989592Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\""
Feb 13 15:42:56.596074 containerd[1509]: time="2025-02-13T15:42:56.596059487Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully"
Feb 13 15:42:56.596074 containerd[1509]: time="2025-02-13T15:42:56.596071691Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully"
Feb 13 15:42:56.596354 kubelet[2620]: I0213 15:42:56.596337 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d"
Feb 13 15:42:56.596935 containerd[1509]: time="2025-02-13T15:42:56.596690340Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\""
Feb 13 15:42:56.596935 containerd[1509]: time="2025-02-13T15:42:56.596726421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:3,}"
Feb 13 15:42:56.596935 containerd[1509]: time="2025-02-13T15:42:56.596822096Z" level=info msg="Ensure that sandbox 51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d in task-service has been cleanup successfully"
Feb 13 15:42:56.597098 containerd[1509]: time="2025-02-13T15:42:56.597081488Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully"
Feb 13 15:42:56.597157 containerd[1509]: time="2025-02-13T15:42:56.597144821Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully"
Feb 13 15:42:56.597206 systemd[1]: run-netns-cni\x2d085a63da\x2d54bd\x2db3bb\x2df521\x2dcb2fcd8c0fcd.mount: Deactivated successfully.
Feb 13 15:42:56.597519 containerd[1509]: time="2025-02-13T15:42:56.597490902Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\""
Feb 13 15:42:56.597601 containerd[1509]: time="2025-02-13T15:42:56.597581448Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully"
Feb 13 15:42:56.597601 containerd[1509]: time="2025-02-13T15:42:56.597595074Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully"
Feb 13 15:42:56.597980 containerd[1509]: time="2025-02-13T15:42:56.597962386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 15:42:56.598722 kubelet[2620]: I0213 15:42:56.598215 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b"
Feb 13 15:42:56.598893 containerd[1509]: time="2025-02-13T15:42:56.598868632Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\""
Feb 13 15:42:56.599060 containerd[1509]: time="2025-02-13T15:42:56.599039824Z" level=info msg="Ensure that sandbox e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b in task-service has been cleanup successfully"
Feb 13 15:42:56.599627 containerd[1509]: time="2025-02-13T15:42:56.599596603Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully"
Feb 13 15:42:56.599779 containerd[1509]: time="2025-02-13T15:42:56.599764509Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully"
Feb 13 15:42:56.600072 containerd[1509]: time="2025-02-13T15:42:56.600051004Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\""
Feb 13 15:42:56.600297 containerd[1509]: time="2025-02-13T15:42:56.600281822Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully"
Feb 13 15:42:56.600481 containerd[1509]: time="2025-02-13T15:42:56.600395191Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully"
Feb 13 15:42:56.600522 kubelet[2620]: I0213 15:42:56.600454 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f"
Feb 13 15:42:56.600726 systemd[1]: run-netns-cni\x2d79b9aea3\x2d0368\x2dd96d\x2dbb79\x2da5aeb5b41c66.mount: Deactivated successfully.
Feb 13 15:42:56.601203 containerd[1509]: time="2025-02-13T15:42:56.601166717Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\""
Feb 13 15:42:56.601280 containerd[1509]: time="2025-02-13T15:42:56.601238496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:2,}"
Feb 13 15:42:56.601501 containerd[1509]: time="2025-02-13T15:42:56.601449575Z" level=info msg="Ensure that sandbox 599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f in task-service has been cleanup successfully"
Feb 13 15:42:56.602054 containerd[1509]: time="2025-02-13T15:42:56.601793812Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully"
Feb 13 15:42:56.602054 containerd[1509]: time="2025-02-13T15:42:56.601849951Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully"
Feb 13 15:42:56.602144 containerd[1509]: time="2025-02-13T15:42:56.602077402Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\""
Feb 13 15:42:56.602175 containerd[1509]: time="2025-02-13T15:42:56.602156505Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully"
Feb 13 15:42:56.602175 containerd[1509]: time="2025-02-13T15:42:56.602165893Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully"
Feb 13 15:42:56.602955 kubelet[2620]: E0213 15:42:56.602457 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:42:56.602955 kubelet[2620]: I0213 15:42:56.602681 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca"
Feb 13 15:42:56.603063 containerd[1509]: time="2025-02-13T15:42:56.602706361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:2,}"
Feb 13 15:42:56.603063 containerd[1509]: time="2025-02-13T15:42:56.603000160Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\""
Feb 13 15:42:56.603206 containerd[1509]: time="2025-02-13T15:42:56.603179909Z" level=info msg="Ensure that sandbox 546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca in task-service has been cleanup successfully"
Feb 13 15:42:56.603555 containerd[1509]: time="2025-02-13T15:42:56.603454932Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully"
Feb 13 15:42:56.603555 containerd[1509]: time="2025-02-13T15:42:56.603481544Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully"
Feb 13 15:42:56.603739 containerd[1509]: time="2025-02-13T15:42:56.603715537Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\""
Feb 13 15:42:56.603812 containerd[1509]: time="2025-02-13T15:42:56.603794470Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully"
Feb 13 15:42:56.603812 containerd[1509]: time="2025-02-13T15:42:56.603807826Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully"
Feb 13 15:42:56.604203 containerd[1509]: time="2025-02-13T15:42:56.604180097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 15:42:56.605818 systemd[1]: run-netns-cni\x2de14a039d\x2df4bd\x2d9c05\x2d72e7\x2dcd41cd41669c.mount: Deactivated successfully.
Feb 13 15:42:56.605920 systemd[1]: run-netns-cni\x2ddc59e479\x2da50c\x2deea6\x2dd4e8\x2d77b5da165569.mount: Deactivated successfully.
Feb 13 15:42:56.605999 systemd[1]: run-netns-cni\x2da01b899f\x2df22f\x2d14a7\x2db788\x2d96fc4eab0081.mount: Deactivated successfully.
Feb 13 15:42:56.791552 containerd[1509]: time="2025-02-13T15:42:56.791480458Z" level=error msg="Failed to destroy network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:56.792145 containerd[1509]: time="2025-02-13T15:42:56.792110058Z" level=error msg="encountered an error cleaning up failed sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:56.792242 containerd[1509]: time="2025-02-13T15:42:56.792208129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:56.792545 kubelet[2620]: E0213 15:42:56.792500 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:42:56.792599 kubelet[2620]: E0213 15:42:56.792567 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t"
Feb 13 15:42:56.792599 kubelet[2620]: E0213 15:42:56.792589 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t"
Feb 13 15:42:56.792679 kubelet[2620]: E0213 15:42:56.792647 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh42t" podUID="ba5086be-6515-43a3-aac4-e336b6f0df11" Feb 13 15:42:56.795405 containerd[1509]: time="2025-02-13T15:42:56.795271005Z" level=error msg="Failed to destroy network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.795873 containerd[1509]: time="2025-02-13T15:42:56.795692903Z" level=error msg="encountered an error cleaning up failed sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.795873 containerd[1509]: time="2025-02-13T15:42:56.795764702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.796136 kubelet[2620]: E0213 15:42:56.796070 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.796214 
kubelet[2620]: E0213 15:42:56.796182 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:56.796269 kubelet[2620]: E0213 15:42:56.796241 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:56.796421 kubelet[2620]: E0213 15:42:56.796386 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:56.807224 containerd[1509]: time="2025-02-13T15:42:56.807169307Z" level=error msg="Failed to destroy network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.807603 containerd[1509]: time="2025-02-13T15:42:56.807568591Z" level=error msg="encountered an error cleaning up failed sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.807660 containerd[1509]: time="2025-02-13T15:42:56.807636683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.807913 kubelet[2620]: E0213 15:42:56.807870 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.808025 kubelet[2620]: E0213 15:42:56.807940 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:56.808025 kubelet[2620]: E0213 15:42:56.807962 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:56.808025 kubelet[2620]: E0213 15:42:56.808018 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346" Feb 13 15:42:56.808462 containerd[1509]: time="2025-02-13T15:42:56.808406105Z" level=error msg="Failed to destroy network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.808914 containerd[1509]: time="2025-02-13T15:42:56.808878961Z" level=error msg="encountered an error cleaning up failed sandbox 
\"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.808968 containerd[1509]: time="2025-02-13T15:42:56.808941823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.809200 kubelet[2620]: E0213 15:42:56.809161 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.809251 kubelet[2620]: E0213 15:42:56.809223 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:56.809276 kubelet[2620]: E0213 15:42:56.809249 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:56.809344 kubelet[2620]: E0213 15:42:56.809301 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podUID="41ec69cc-3a06-44c4-8295-0752449d5e76" Feb 13 15:42:56.859129 containerd[1509]: time="2025-02-13T15:42:56.858940213Z" level=error msg="Failed to destroy network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.859383 containerd[1509]: time="2025-02-13T15:42:56.859355809Z" level=error msg="encountered an error cleaning up failed sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:42:56.859450 containerd[1509]: time="2025-02-13T15:42:56.859416878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.859865 kubelet[2620]: E0213 15:42:56.859745 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.860169 kubelet[2620]: E0213 15:42:56.859925 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:56.860169 kubelet[2620]: E0213 15:42:56.859947 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:56.860169 kubelet[2620]: 
E0213 15:42:56.859995 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611" Feb 13 15:42:56.921432 containerd[1509]: time="2025-02-13T15:42:56.921370521Z" level=error msg="Failed to destroy network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.921829 containerd[1509]: time="2025-02-13T15:42:56.921789663Z" level=error msg="encountered an error cleaning up failed sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.921994 containerd[1509]: time="2025-02-13T15:42:56.921847827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.922120 kubelet[2620]: E0213 15:42:56.922073 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:56.922177 kubelet[2620]: E0213 15:42:56.922146 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:56.922213 kubelet[2620]: E0213 15:42:56.922182 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:56.922279 kubelet[2620]: E0213 15:42:56.922245 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e" Feb 13 15:42:57.611230 kubelet[2620]: I0213 15:42:57.611170 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42" Feb 13 15:42:57.611836 containerd[1509]: time="2025-02-13T15:42:57.611805975Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:42:57.612079 containerd[1509]: time="2025-02-13T15:42:57.612009920Z" level=info msg="Ensure that sandbox 62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42 in task-service has been cleanup successfully" Feb 13 15:42:57.612504 containerd[1509]: time="2025-02-13T15:42:57.612204918Z" level=info msg="TearDown network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" successfully" Feb 13 15:42:57.612504 containerd[1509]: time="2025-02-13T15:42:57.612299551Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" returns successfully" Feb 13 15:42:57.612870 containerd[1509]: time="2025-02-13T15:42:57.612852413Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:42:57.612999 containerd[1509]: time="2025-02-13T15:42:57.612984979Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully" Feb 13 15:42:57.613467 
containerd[1509]: time="2025-02-13T15:42:57.613407267Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully" Feb 13 15:42:57.613815 containerd[1509]: time="2025-02-13T15:42:57.613780862Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:42:57.613930 containerd[1509]: time="2025-02-13T15:42:57.613889702Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:42:57.614036 containerd[1509]: time="2025-02-13T15:42:57.613928067Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:42:57.614615 containerd[1509]: time="2025-02-13T15:42:57.614560231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:42:57.615493 systemd[1]: run-netns-cni\x2d833c3244\x2d6ddd\x2d7dbf\x2d4304\x2d4e5772df7e84.mount: Deactivated successfully. 
Feb 13 15:42:57.616124 kubelet[2620]: I0213 15:42:57.616082 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145" Feb 13 15:42:57.617488 containerd[1509]: time="2025-02-13T15:42:57.616876499Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:42:57.617488 containerd[1509]: time="2025-02-13T15:42:57.617202109Z" level=info msg="Ensure that sandbox 1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145 in task-service has been cleanup successfully" Feb 13 15:42:57.617633 containerd[1509]: time="2025-02-13T15:42:57.617616742Z" level=info msg="TearDown network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" successfully" Feb 13 15:42:57.617706 containerd[1509]: time="2025-02-13T15:42:57.617680566Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" returns successfully" Feb 13 15:42:57.618659 containerd[1509]: time="2025-02-13T15:42:57.618625026Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:42:57.618924 containerd[1509]: time="2025-02-13T15:42:57.618899808Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully" Feb 13 15:42:57.618986 containerd[1509]: time="2025-02-13T15:42:57.618972830Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully" Feb 13 15:42:57.619493 systemd[1]: run-netns-cni\x2dbcb0fd2f\x2dfaef\x2d2476\x2db146\x2d3eea8cf2bdbc.mount: Deactivated successfully. 
Feb 13 15:42:57.619604 containerd[1509]: time="2025-02-13T15:42:57.619517746Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:42:57.619604 containerd[1509]: time="2025-02-13T15:42:57.619587781Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:42:57.619604 containerd[1509]: time="2025-02-13T15:42:57.619597260Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:42:57.620032 kubelet[2620]: E0213 15:42:57.619878 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:57.620032 kubelet[2620]: I0213 15:42:57.619895 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c" Feb 13 15:42:57.620434 containerd[1509]: time="2025-02-13T15:42:57.620396648Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:42:57.621346 containerd[1509]: time="2025-02-13T15:42:57.620659547Z" level=info msg="Ensure that sandbox 054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c in task-service has been cleanup successfully" Feb 13 15:42:57.621346 containerd[1509]: time="2025-02-13T15:42:57.620662623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:3,}" Feb 13 15:42:57.621346 containerd[1509]: time="2025-02-13T15:42:57.620860908Z" level=info msg="TearDown network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:42:57.621346 containerd[1509]: time="2025-02-13T15:42:57.620874153Z" level=info 
msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns successfully" Feb 13 15:42:57.623214 systemd[1]: run-netns-cni\x2de8a1dd87\x2def34\x2da499\x2d4170\x2d6e44d0663294.mount: Deactivated successfully. Feb 13 15:42:57.623815 containerd[1509]: time="2025-02-13T15:42:57.623787627Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:42:57.623990 containerd[1509]: time="2025-02-13T15:42:57.623942878Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:42:57.624204 containerd[1509]: time="2025-02-13T15:42:57.624040217Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:42:57.624566 containerd[1509]: time="2025-02-13T15:42:57.624539063Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:42:57.624706 containerd[1509]: time="2025-02-13T15:42:57.624620150Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:42:57.624706 containerd[1509]: time="2025-02-13T15:42:57.624635881Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:42:57.625099 containerd[1509]: time="2025-02-13T15:42:57.624889061Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:42:57.625099 containerd[1509]: time="2025-02-13T15:42:57.625043130Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:42:57.625099 containerd[1509]: time="2025-02-13T15:42:57.625054762Z" level=info msg="StopPodSandbox for 
\"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:42:57.625214 kubelet[2620]: I0213 15:42:57.625195 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8" Feb 13 15:42:57.625841 containerd[1509]: time="2025-02-13T15:42:57.625558258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:4,}" Feb 13 15:42:57.625841 containerd[1509]: time="2025-02-13T15:42:57.625594047Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:42:57.625841 containerd[1509]: time="2025-02-13T15:42:57.625771210Z" level=info msg="Ensure that sandbox b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8 in task-service has been cleanup successfully" Feb 13 15:42:57.625972 containerd[1509]: time="2025-02-13T15:42:57.625954826Z" level=info msg="TearDown network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:42:57.626049 containerd[1509]: time="2025-02-13T15:42:57.626035042Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:42:57.626476 containerd[1509]: time="2025-02-13T15:42:57.626456318Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:42:57.626554 containerd[1509]: time="2025-02-13T15:42:57.626541232Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:42:57.626577 containerd[1509]: time="2025-02-13T15:42:57.626553366Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:42:57.626961 
containerd[1509]: time="2025-02-13T15:42:57.626941597Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:42:57.627081 containerd[1509]: time="2025-02-13T15:42:57.627028176Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:42:57.627081 containerd[1509]: time="2025-02-13T15:42:57.627071569Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:42:57.627298 kubelet[2620]: I0213 15:42:57.627269 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048" Feb 13 15:42:57.627791 systemd[1]: run-netns-cni\x2d7ccde2d8\x2d63bd\x2d8935\x2d25e2\x2d9a4b93514b8b.mount: Deactivated successfully. Feb 13 15:42:57.627932 containerd[1509]: time="2025-02-13T15:42:57.627911757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:3,}" Feb 13 15:42:57.628155 containerd[1509]: time="2025-02-13T15:42:57.628130341Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:42:57.628300 containerd[1509]: time="2025-02-13T15:42:57.628284369Z" level=info msg="Ensure that sandbox d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048 in task-service has been cleanup successfully" Feb 13 15:42:57.628520 containerd[1509]: time="2025-02-13T15:42:57.628479427Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 15:42:57.628520 containerd[1509]: time="2025-02-13T15:42:57.628497452Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns 
successfully" Feb 13 15:42:57.629530 containerd[1509]: time="2025-02-13T15:42:57.629500635Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:42:57.629605 containerd[1509]: time="2025-02-13T15:42:57.629596221Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 13 15:42:57.629633 containerd[1509]: time="2025-02-13T15:42:57.629607743Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully" Feb 13 15:42:57.629961 containerd[1509]: time="2025-02-13T15:42:57.629816948Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:42:57.629961 containerd[1509]: time="2025-02-13T15:42:57.629905910Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:42:57.629961 containerd[1509]: time="2025-02-13T15:42:57.629916671Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:42:57.630059 kubelet[2620]: E0213 15:42:57.630047 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:57.630254 containerd[1509]: time="2025-02-13T15:42:57.630235349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:3,}" Feb 13 15:42:57.630288 kubelet[2620]: I0213 15:42:57.630251 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09" Feb 13 15:42:57.630940 containerd[1509]: time="2025-02-13T15:42:57.630663729Z" level=info 
msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:42:57.630940 containerd[1509]: time="2025-02-13T15:42:57.630827065Z" level=info msg="Ensure that sandbox 644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09 in task-service has been cleanup successfully" Feb 13 15:42:57.631074 containerd[1509]: time="2025-02-13T15:42:57.631039707Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:42:57.631074 containerd[1509]: time="2025-02-13T15:42:57.631056760Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:42:57.633868 containerd[1509]: time="2025-02-13T15:42:57.633812118Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:42:57.634047 containerd[1509]: time="2025-02-13T15:42:57.634021063Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully" Feb 13 15:42:57.634047 containerd[1509]: time="2025-02-13T15:42:57.634041964Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:42:57.634344 containerd[1509]: time="2025-02-13T15:42:57.634301637Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:42:57.634431 containerd[1509]: time="2025-02-13T15:42:57.634394186Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:42:57.634431 containerd[1509]: time="2025-02-13T15:42:57.634408544Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:42:57.634821 containerd[1509]: 
time="2025-02-13T15:42:57.634791766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:42:58.018723 containerd[1509]: time="2025-02-13T15:42:58.018665248Z" level=error msg="Failed to destroy network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.019151 containerd[1509]: time="2025-02-13T15:42:58.019124728Z" level=error msg="encountered an error cleaning up failed sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.019244 containerd[1509]: time="2025-02-13T15:42:58.019218409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.019994 kubelet[2620]: E0213 15:42:58.019795 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 15:42:58.019994 kubelet[2620]: E0213 15:42:58.019869 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:58.019994 kubelet[2620]: E0213 15:42:58.019894 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:42:58.020216 kubelet[2620]: E0213 15:42:58.019946 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podUID="41ec69cc-3a06-44c4-8295-0752449d5e76" Feb 13 15:42:58.062781 containerd[1509]: time="2025-02-13T15:42:58.062631917Z" level=error msg="Failed 
to destroy network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.063221 containerd[1509]: time="2025-02-13T15:42:58.063194837Z" level=error msg="encountered an error cleaning up failed sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.063369 containerd[1509]: time="2025-02-13T15:42:58.063347813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.064100 kubelet[2620]: E0213 15:42:58.063675 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.064100 kubelet[2620]: E0213 15:42:58.063741 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:58.064100 kubelet[2620]: E0213 15:42:58.063764 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:42:58.064272 kubelet[2620]: E0213 15:42:58.063805 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e" Feb 13 15:42:58.087196 containerd[1509]: time="2025-02-13T15:42:58.087128353Z" level=error msg="Failed to destroy network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.087820 containerd[1509]: time="2025-02-13T15:42:58.087714698Z" level=error msg="encountered an error cleaning up failed sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.087907 containerd[1509]: time="2025-02-13T15:42:58.087873887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.088258 kubelet[2620]: E0213 15:42:58.088199 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.089391 kubelet[2620]: E0213 15:42:58.088482 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 
15:42:58.089391 kubelet[2620]: E0213 15:42:58.088510 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:42:58.089391 kubelet[2620]: E0213 15:42:58.088566 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611" Feb 13 15:42:58.105095 containerd[1509]: time="2025-02-13T15:42:58.105037833Z" level=error msg="Failed to destroy network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.105944 containerd[1509]: time="2025-02-13T15:42:58.105840688Z" level=error msg="encountered an error cleaning up failed sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.105944 containerd[1509]: time="2025-02-13T15:42:58.105919119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.106297 kubelet[2620]: E0213 15:42:58.106236 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.106552 kubelet[2620]: E0213 15:42:58.106473 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:42:58.106552 kubelet[2620]: E0213 15:42:58.106511 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:42:58.107004 kubelet[2620]: E0213 15:42:58.106966 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh42t" podUID="ba5086be-6515-43a3-aac4-e336b6f0df11" Feb 13 15:42:58.113805 containerd[1509]: time="2025-02-13T15:42:58.113738532Z" level=error msg="Failed to destroy network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.114258 containerd[1509]: time="2025-02-13T15:42:58.114229883Z" level=error msg="encountered an error cleaning up failed sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.114364 containerd[1509]: time="2025-02-13T15:42:58.114305319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:4,} failed, error" 
error="failed to setup network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.114584 kubelet[2620]: E0213 15:42:58.114533 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.114686 kubelet[2620]: E0213 15:42:58.114610 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:58.114686 kubelet[2620]: E0213 15:42:58.114639 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:42:58.114804 kubelet[2620]: E0213 15:42:58.114697 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:42:58.133225 containerd[1509]: time="2025-02-13T15:42:58.133169990Z" level=error msg="Failed to destroy network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.133780 containerd[1509]: time="2025-02-13T15:42:58.133744572Z" level=error msg="encountered an error cleaning up failed sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.133827 containerd[1509]: time="2025-02-13T15:42:58.133803997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.134053 kubelet[2620]: E0213 15:42:58.134006 2620 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:42:58.134126 kubelet[2620]: E0213 15:42:58.134077 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:58.134126 kubelet[2620]: E0213 15:42:58.134096 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:42:58.134189 kubelet[2620]: E0213 15:42:58.134144 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346" Feb 13 15:42:58.183375 systemd[1]: run-netns-cni\x2dcf01f84b\x2d073c\x2df885\x2dd1d4\x2df16f7d39c793.mount: Deactivated successfully. Feb 13 15:42:58.183502 systemd[1]: run-netns-cni\x2dfc53baf2\x2d0a0d\x2d34a7\x2dedf9\x2d79c06f2fd08c.mount: Deactivated successfully. Feb 13 15:42:58.636023 kubelet[2620]: I0213 15:42:58.635989 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d" Feb 13 15:42:58.638353 kubelet[2620]: I0213 15:42:58.638291 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8" Feb 13 15:42:58.638955 containerd[1509]: time="2025-02-13T15:42:58.638926042Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:42:58.639675 containerd[1509]: time="2025-02-13T15:42:58.639642198Z" level=info msg="Ensure that sandbox acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8 in task-service has been cleanup successfully" Feb 13 15:42:58.639930 containerd[1509]: time="2025-02-13T15:42:58.639852185Z" level=info msg="TearDown network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" successfully" Feb 13 15:42:58.639930 containerd[1509]: time="2025-02-13T15:42:58.639869428Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" returns successfully" Feb 13 15:42:58.642522 systemd[1]: run-netns-cni\x2d40739670\x2dec71\x2db9b4\x2def75\x2d7270864a11f8.mount: Deactivated successfully. 
Feb 13 15:42:58.643518 containerd[1509]: time="2025-02-13T15:42:58.643490250Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:42:58.643585 containerd[1509]: time="2025-02-13T15:42:58.643573160Z" level=info msg="TearDown network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:42:58.643621 containerd[1509]: time="2025-02-13T15:42:58.643583962Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns successfully" Feb 13 15:42:58.644304 containerd[1509]: time="2025-02-13T15:42:58.644175657Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:42:58.644304 containerd[1509]: time="2025-02-13T15:42:58.644254540Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:42:58.644304 containerd[1509]: time="2025-02-13T15:42:58.644264930Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:42:58.644507 containerd[1509]: time="2025-02-13T15:42:58.644483543Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:42:58.644575 containerd[1509]: time="2025-02-13T15:42:58.644558048Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:42:58.644575 containerd[1509]: time="2025-02-13T15:42:58.644570302Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:42:58.644809 containerd[1509]: time="2025-02-13T15:42:58.644781781Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:42:58.644866 
containerd[1509]: time="2025-02-13T15:42:58.644857387Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:42:58.644989 containerd[1509]: time="2025-02-13T15:42:58.644866966Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:42:58.645084 kubelet[2620]: I0213 15:42:58.645036 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c" Feb 13 15:42:58.645568 containerd[1509]: time="2025-02-13T15:42:58.645544819Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" Feb 13 15:42:58.645720 containerd[1509]: time="2025-02-13T15:42:58.645696733Z" level=info msg="Ensure that sandbox 9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c in task-service has been cleanup successfully" Feb 13 15:42:58.645798 containerd[1509]: time="2025-02-13T15:42:58.645721612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:5,}" Feb 13 15:42:58.646382 containerd[1509]: time="2025-02-13T15:42:58.646354617Z" level=info msg="TearDown network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" successfully" Feb 13 15:42:58.646382 containerd[1509]: time="2025-02-13T15:42:58.646375978Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" returns successfully" Feb 13 15:42:58.646664 containerd[1509]: time="2025-02-13T15:42:58.646638386Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:42:58.646767 containerd[1509]: time="2025-02-13T15:42:58.646749071Z" level=info msg="TearDown network for sandbox 
\"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" successfully" Feb 13 15:42:58.646812 containerd[1509]: time="2025-02-13T15:42:58.646764851Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" returns successfully" Feb 13 15:42:58.647117 kubelet[2620]: I0213 15:42:58.647071 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555" Feb 13 15:42:58.648805 systemd[1]: run-netns-cni\x2d39ac3d18\x2dbd4f\x2d7a83\x2d1046\x2d56c39284e78c.mount: Deactivated successfully. Feb 13 15:42:58.649579 containerd[1509]: time="2025-02-13T15:42:58.649061799Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:42:58.649579 containerd[1509]: time="2025-02-13T15:42:58.649160210Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully" Feb 13 15:42:58.649579 containerd[1509]: time="2025-02-13T15:42:58.649171842Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully" Feb 13 15:42:58.649579 containerd[1509]: time="2025-02-13T15:42:58.649075546Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" Feb 13 15:42:58.649579 containerd[1509]: time="2025-02-13T15:42:58.649433369Z" level=info msg="Ensure that sandbox 90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555 in task-service has been cleanup successfully" Feb 13 15:42:58.649802 containerd[1509]: time="2025-02-13T15:42:58.649785230Z" level=info msg="TearDown network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" successfully" Feb 13 15:42:58.649877 containerd[1509]: time="2025-02-13T15:42:58.649864474Z" level=info msg="StopPodSandbox for 
\"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" returns successfully" Feb 13 15:42:58.649963 containerd[1509]: time="2025-02-13T15:42:58.649824286Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:42:58.650116 containerd[1509]: time="2025-02-13T15:42:58.650080282Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:42:58.650192 containerd[1509]: time="2025-02-13T15:42:58.650160808Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:42:58.650818 containerd[1509]: time="2025-02-13T15:42:58.650767883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:42:58.651362 systemd[1]: run-netns-cni\x2d439beaf3\x2dafdb\x2d2c4c\x2d6dc3\x2dc606ed9c599f.mount: Deactivated successfully. 
Feb 13 15:42:58.651728 containerd[1509]: time="2025-02-13T15:42:58.651691501Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:42:58.652285 kubelet[2620]: I0213 15:42:58.652261 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186" Feb 13 15:42:58.652720 containerd[1509]: time="2025-02-13T15:42:58.652696167Z" level=info msg="TearDown network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" successfully" Feb 13 15:42:58.652771 containerd[1509]: time="2025-02-13T15:42:58.652718460Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" returns successfully" Feb 13 15:42:58.652850 containerd[1509]: time="2025-02-13T15:42:58.652829014Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:42:58.653068 containerd[1509]: time="2025-02-13T15:42:58.653033641Z" level=info msg="Ensure that sandbox 6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186 in task-service has been cleanup successfully" Feb 13 15:42:58.653565 containerd[1509]: time="2025-02-13T15:42:58.653523720Z" level=info msg="TearDown network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" successfully" Feb 13 15:42:58.653565 containerd[1509]: time="2025-02-13T15:42:58.653551192Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" returns successfully" Feb 13 15:42:58.654004 containerd[1509]: time="2025-02-13T15:42:58.653975194Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:42:58.654096 containerd[1509]: time="2025-02-13T15:42:58.654074546Z" level=info msg="TearDown network for sandbox 
\"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:42:58.654128 containerd[1509]: time="2025-02-13T15:42:58.654092701Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:42:58.654172 containerd[1509]: time="2025-02-13T15:42:58.654156946Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654230748Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654244926Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654737881Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654757108Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654823376Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654833736Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654846350Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.654858424Z" level=info msg="StopPodSandbox for 
\"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.655179296Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.655249371Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.655258790Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.655777875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:4,}" Feb 13 15:42:58.656234 containerd[1509]: time="2025-02-13T15:42:58.655808644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:4,}" Feb 13 15:42:58.656743 kubelet[2620]: E0213 15:42:58.655398 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:58.656743 kubelet[2620]: I0213 15:42:58.656310 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17" Feb 13 15:42:58.656705 systemd[1]: run-netns-cni\x2db134391a\x2d533c\x2db357\x2daa55\x2d2579e1369865.mount: Deactivated successfully. 
Feb 13 15:42:58.656914 containerd[1509]: time="2025-02-13T15:42:58.656723586Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:42:58.656951 containerd[1509]: time="2025-02-13T15:42:58.656932801Z" level=info msg="Ensure that sandbox b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17 in task-service has been cleanup successfully" Feb 13 15:42:58.657186 containerd[1509]: time="2025-02-13T15:42:58.657142187Z" level=info msg="TearDown network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" successfully" Feb 13 15:42:58.657186 containerd[1509]: time="2025-02-13T15:42:58.657158789Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" returns successfully" Feb 13 15:42:58.657535 containerd[1509]: time="2025-02-13T15:42:58.657506342Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:42:58.657634 containerd[1509]: time="2025-02-13T15:42:58.657596887Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 15:42:58.657634 containerd[1509]: time="2025-02-13T15:42:58.657612366Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns successfully" Feb 13 15:42:58.657871 containerd[1509]: time="2025-02-13T15:42:58.657845949Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:42:58.657963 containerd[1509]: time="2025-02-13T15:42:58.657945312Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 13 15:42:58.657988 containerd[1509]: time="2025-02-13T15:42:58.657961303Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" 
returns successfully" Feb 13 15:42:58.658484 containerd[1509]: time="2025-02-13T15:42:58.658275380Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:42:58.658484 containerd[1509]: time="2025-02-13T15:42:58.658411404Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:42:58.658484 containerd[1509]: time="2025-02-13T15:42:58.658427025Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:42:58.658729 kubelet[2620]: E0213 15:42:58.658707 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:58.658959 containerd[1509]: time="2025-02-13T15:42:58.658929867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:4,}" Feb 13 15:42:58.664794 containerd[1509]: time="2025-02-13T15:42:58.664733637Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:42:58.664794 containerd[1509]: time="2025-02-13T15:42:58.664901942Z" level=info msg="Ensure that sandbox 287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d in task-service has been cleanup successfully" Feb 13 15:42:58.665148 containerd[1509]: time="2025-02-13T15:42:58.665087030Z" level=info msg="TearDown network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" successfully" Feb 13 15:42:58.665148 containerd[1509]: time="2025-02-13T15:42:58.665099675Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" returns successfully" Feb 13 15:42:58.665406 containerd[1509]: time="2025-02-13T15:42:58.665369909Z" 
level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:42:58.665453 containerd[1509]: time="2025-02-13T15:42:58.665445776Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:42:58.665484 containerd[1509]: time="2025-02-13T15:42:58.665456226Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:42:58.665748 containerd[1509]: time="2025-02-13T15:42:58.665723163Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:42:58.665832 containerd[1509]: time="2025-02-13T15:42:58.665816503Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully" Feb 13 15:42:58.665854 containerd[1509]: time="2025-02-13T15:42:58.665830430Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:42:58.666119 containerd[1509]: time="2025-02-13T15:42:58.666071367Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:42:58.666190 containerd[1509]: time="2025-02-13T15:42:58.666173514Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:42:58.666190 containerd[1509]: time="2025-02-13T15:42:58.666187131Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:42:58.666532 containerd[1509]: time="2025-02-13T15:42:58.666510477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:42:59.179271 systemd[1]: 
run-netns-cni\x2d23a18a84\x2dad42\x2daed7\x2d17c0\x2d387f7cc4b694.mount: Deactivated successfully. Feb 13 15:42:59.179428 systemd[1]: run-netns-cni\x2d336d20a0\x2d71dc\x2d595b\x2d10d5\x2d61d6ebc9dd47.mount: Deactivated successfully. Feb 13 15:43:00.906620 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:59250.service - OpenSSH per-connection server daemon (10.0.0.1:59250). Feb 13 15:43:01.138496 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 59250 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:01.140242 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:01.144516 systemd-logind[1495]: New session 11 of user core. Feb 13 15:43:01.155438 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:43:01.298346 sshd[4431]: Connection closed by 10.0.0.1 port 59250 Feb 13 15:43:01.299012 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:01.303032 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:59250.service: Deactivated successfully. Feb 13 15:43:01.305084 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:43:01.305846 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:43:01.306850 systemd-logind[1495]: Removed session 11. 
Feb 13 15:43:01.532175 containerd[1509]: time="2025-02-13T15:43:01.532127727Z" level=error msg="Failed to destroy network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.533630 containerd[1509]: time="2025-02-13T15:43:01.532977841Z" level=error msg="encountered an error cleaning up failed sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.533630 containerd[1509]: time="2025-02-13T15:43:01.533043708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.533743 kubelet[2620]: E0213 15:43:01.533253 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.533743 kubelet[2620]: E0213 15:43:01.533333 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:43:01.533743 kubelet[2620]: E0213 15:43:01.533358 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:43:01.534104 kubelet[2620]: E0213 15:43:01.533396 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:43:01.612608 containerd[1509]: time="2025-02-13T15:43:01.612440147Z" level=error msg="Failed to destroy network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:43:01.670208 kubelet[2620]: I0213 15:43:01.670170 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a" Feb 13 15:43:01.716768 containerd[1509]: time="2025-02-13T15:43:01.612950805Z" level=error msg="encountered an error cleaning up failed sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.716918 containerd[1509]: time="2025-02-13T15:43:01.716824434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.716918 containerd[1509]: time="2025-02-13T15:43:01.671008042Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" Feb 13 15:43:01.717097 kubelet[2620]: E0213 15:43:01.717039 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.717176 kubelet[2620]: E0213 15:43:01.717106 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:43:01.717176 kubelet[2620]: E0213 15:43:01.717130 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:43:01.717240 containerd[1509]: time="2025-02-13T15:43:01.717131058Z" level=info msg="Ensure that sandbox e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a in task-service has been cleanup successfully" Feb 13 15:43:01.717322 kubelet[2620]: E0213 15:43:01.717173 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podUID="41ec69cc-3a06-44c4-8295-0752449d5e76" Feb 13 15:43:01.717377 containerd[1509]: time="2025-02-13T15:43:01.717342015Z" 
level=info msg="TearDown network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" successfully" Feb 13 15:43:01.717377 containerd[1509]: time="2025-02-13T15:43:01.717356223Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" returns successfully" Feb 13 15:43:01.717889 containerd[1509]: time="2025-02-13T15:43:01.717846962Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:43:01.717969 containerd[1509]: time="2025-02-13T15:43:01.717928780Z" level=info msg="TearDown network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" successfully" Feb 13 15:43:01.718033 containerd[1509]: time="2025-02-13T15:43:01.717969128Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" returns successfully" Feb 13 15:43:01.718257 containerd[1509]: time="2025-02-13T15:43:01.718212549Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:43:01.718391 containerd[1509]: time="2025-02-13T15:43:01.718294988Z" level=info msg="TearDown network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:43:01.718391 containerd[1509]: time="2025-02-13T15:43:01.718337340Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns successfully" Feb 13 15:43:01.718733 containerd[1509]: time="2025-02-13T15:43:01.718707867Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:43:01.718864 containerd[1509]: time="2025-02-13T15:43:01.718836456Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:43:01.718864 containerd[1509]: time="2025-02-13T15:43:01.718855042Z" 
level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:43:01.719394 containerd[1509]: time="2025-02-13T15:43:01.719275265Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:43:01.719394 containerd[1509]: time="2025-02-13T15:43:01.719367723Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:43:01.719394 containerd[1509]: time="2025-02-13T15:43:01.719376871Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:43:01.719623 containerd[1509]: time="2025-02-13T15:43:01.719602137Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:43:01.719687 containerd[1509]: time="2025-02-13T15:43:01.719670870Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:43:01.719687 containerd[1509]: time="2025-02-13T15:43:01.719682723Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:43:01.720282 containerd[1509]: time="2025-02-13T15:43:01.720256813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:6,}" Feb 13 15:43:01.794253 containerd[1509]: time="2025-02-13T15:43:01.794187039Z" level=error msg="Failed to destroy network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.794987 containerd[1509]: 
time="2025-02-13T15:43:01.794732133Z" level=error msg="encountered an error cleaning up failed sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.794987 containerd[1509]: time="2025-02-13T15:43:01.794802801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.795535 kubelet[2620]: E0213 15:43:01.795029 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.795535 kubelet[2620]: E0213 15:43:01.795082 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:43:01.795535 kubelet[2620]: E0213 15:43:01.795105 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:43:01.795736 kubelet[2620]: E0213 15:43:01.795141 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh42t" podUID="ba5086be-6515-43a3-aac4-e336b6f0df11" Feb 13 15:43:01.848074 containerd[1509]: time="2025-02-13T15:43:01.847495111Z" level=error msg="Failed to destroy network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.848347 containerd[1509]: time="2025-02-13T15:43:01.848138996Z" level=error msg="encountered an error cleaning up failed sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:43:01.848347 containerd[1509]: time="2025-02-13T15:43:01.848204773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.848597 kubelet[2620]: E0213 15:43:01.848550 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.848709 kubelet[2620]: E0213 15:43:01.848629 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:43:01.848709 kubelet[2620]: E0213 15:43:01.848658 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:43:01.849386 kubelet[2620]: E0213 15:43:01.848855 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e" Feb 13 15:43:01.865119 containerd[1509]: time="2025-02-13T15:43:01.865009135Z" level=error msg="Failed to destroy network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.865720 containerd[1509]: time="2025-02-13T15:43:01.865688078Z" level=error msg="encountered an error cleaning up failed sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.865776 containerd[1509]: time="2025-02-13T15:43:01.865753454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:4,} failed, error" 
error="failed to setup network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.866293 kubelet[2620]: E0213 15:43:01.866014 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.866293 kubelet[2620]: E0213 15:43:01.866088 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:43:01.866293 kubelet[2620]: E0213 15:43:01.866113 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:43:01.866631 kubelet[2620]: E0213 15:43:01.866155 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611" Feb 13 15:43:01.898447 containerd[1509]: time="2025-02-13T15:43:01.898395506Z" level=error msg="Failed to destroy network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.898843 containerd[1509]: time="2025-02-13T15:43:01.898820839Z" level=error msg="encountered an error cleaning up failed sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.898908 containerd[1509]: time="2025-02-13T15:43:01.898889973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.899162 kubelet[2620]: E0213 15:43:01.899124 2620 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.899502 kubelet[2620]: E0213 15:43:01.899460 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:43:01.899502 kubelet[2620]: E0213 15:43:01.899496 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:43:01.899706 kubelet[2620]: E0213 15:43:01.899540 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346" Feb 13 15:43:01.928918 containerd[1509]: time="2025-02-13T15:43:01.928869014Z" level=error msg="Failed to destroy network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.929286 containerd[1509]: time="2025-02-13T15:43:01.929264780Z" level=error msg="encountered an error cleaning up failed sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.929376 containerd[1509]: time="2025-02-13T15:43:01.929351738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:01.929590 kubelet[2620]: E0213 15:43:01.929543 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:43:01.929638 kubelet[2620]: E0213 15:43:01.929603 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:43:01.929638 kubelet[2620]: E0213 15:43:01.929625 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:43:01.929698 kubelet[2620]: E0213 15:43:01.929662 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:43:02.458154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff-shm.mount: Deactivated successfully. 
Feb 13 15:43:02.458752 systemd[1]: run-netns-cni\x2d1aa33e5b\x2db6f4\x2daadc\x2dde01\x2df7b248e47636.mount: Deactivated successfully. Feb 13 15:43:02.458954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a-shm.mount: Deactivated successfully. Feb 13 15:43:02.684449 kubelet[2620]: I0213 15:43:02.684409 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858" Feb 13 15:43:02.686616 containerd[1509]: time="2025-02-13T15:43:02.686207860Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" Feb 13 15:43:02.686616 containerd[1509]: time="2025-02-13T15:43:02.686467742Z" level=info msg="Ensure that sandbox fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858 in task-service has been cleanup successfully" Feb 13 15:43:02.687396 containerd[1509]: time="2025-02-13T15:43:02.687375317Z" level=info msg="TearDown network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" successfully" Feb 13 15:43:02.687483 containerd[1509]: time="2025-02-13T15:43:02.687465742Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" returns successfully" Feb 13 15:43:02.688466 containerd[1509]: time="2025-02-13T15:43:02.688435466Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:43:02.688891 containerd[1509]: time="2025-02-13T15:43:02.688826703Z" level=info msg="TearDown network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" successfully" Feb 13 15:43:02.688891 containerd[1509]: time="2025-02-13T15:43:02.688844477Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" returns successfully" Feb 13 15:43:02.689249 containerd[1509]: 
time="2025-02-13T15:43:02.689081926Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:43:02.689249 containerd[1509]: time="2025-02-13T15:43:02.689157443Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:43:02.689249 containerd[1509]: time="2025-02-13T15:43:02.689167221Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:43:02.689486 containerd[1509]: time="2025-02-13T15:43:02.689469636Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:43:02.689568 systemd[1]: run-netns-cni\x2d7f44931b\x2dc9f3\x2d2c06\x2d41e1\x2d95e313ba5f7e.mount: Deactivated successfully. Feb 13 15:43:02.689684 containerd[1509]: time="2025-02-13T15:43:02.689607403Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully" Feb 13 15:43:02.689684 containerd[1509]: time="2025-02-13T15:43:02.689617983Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:43:02.690396 containerd[1509]: time="2025-02-13T15:43:02.690268561Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:43:02.690453 containerd[1509]: time="2025-02-13T15:43:02.690406388Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:43:02.690453 containerd[1509]: time="2025-02-13T15:43:02.690422329Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:43:02.691449 containerd[1509]: time="2025-02-13T15:43:02.690963965Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:43:02.691953 kubelet[2620]: I0213 15:43:02.691913 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547" Feb 13 15:43:02.692871 containerd[1509]: time="2025-02-13T15:43:02.692828279Z" level=info msg="StopPodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\"" Feb 13 15:43:02.693114 containerd[1509]: time="2025-02-13T15:43:02.693087380Z" level=info msg="Ensure that sandbox 9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547 in task-service has been cleanup successfully" Feb 13 15:43:02.696406 kubelet[2620]: I0213 15:43:02.695445 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff" Feb 13 15:43:02.696213 systemd[1]: run-netns-cni\x2d31fe5cdf\x2d76ee\x2d9c0a\x2d1614\x2db2c3d51264dc.mount: Deactivated successfully. 
Feb 13 15:43:02.696555 containerd[1509]: time="2025-02-13T15:43:02.695855011Z" level=info msg="StopPodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\"" Feb 13 15:43:02.696555 containerd[1509]: time="2025-02-13T15:43:02.696096448Z" level=info msg="Ensure that sandbox 67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff in task-service has been cleanup successfully" Feb 13 15:43:02.696555 containerd[1509]: time="2025-02-13T15:43:02.696306354Z" level=info msg="TearDown network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" successfully" Feb 13 15:43:02.696555 containerd[1509]: time="2025-02-13T15:43:02.696346722Z" level=info msg="StopPodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" returns successfully" Feb 13 15:43:02.696555 containerd[1509]: time="2025-02-13T15:43:02.696507042Z" level=info msg="TearDown network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" successfully" Feb 13 15:43:02.696555 containerd[1509]: time="2025-02-13T15:43:02.696523153Z" level=info msg="StopPodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" returns successfully" Feb 13 15:43:02.697190 containerd[1509]: time="2025-02-13T15:43:02.696994615Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" Feb 13 15:43:02.697190 containerd[1509]: time="2025-02-13T15:43:02.697073027Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" Feb 13 15:43:02.697190 containerd[1509]: time="2025-02-13T15:43:02.697086853Z" level=info msg="TearDown network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" successfully" Feb 13 15:43:02.697190 containerd[1509]: time="2025-02-13T15:43:02.697103114Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" 
returns successfully" Feb 13 15:43:02.697190 containerd[1509]: time="2025-02-13T15:43:02.697144645Z" level=info msg="TearDown network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" successfully" Feb 13 15:43:02.697190 containerd[1509]: time="2025-02-13T15:43:02.697154403Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" returns successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.697931807Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698005259Z" level=info msg="TearDown network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698014056Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" returns successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698066267Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698122485Z" level=info msg="TearDown network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698156331Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" returns successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698635277Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698738206Z" level=info msg="TearDown network for sandbox 
\"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698749979Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698799715Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698874189Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.698883277Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.699152066Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.699219326Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.699227532Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.699303158Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.699398593Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:43:02.699439 containerd[1509]: time="2025-02-13T15:43:02.699412980Z" level=info msg="StopPodSandbox for 
\"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:43:02.699692 systemd[1]: run-netns-cni\x2d8f2f0e71\x2db956\x2d0754\x2dd4c3\x2d825d470a6977.mount: Deactivated successfully. Feb 13 15:43:02.700647 containerd[1509]: time="2025-02-13T15:43:02.700481677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:43:02.700647 containerd[1509]: time="2025-02-13T15:43:02.700505753Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:43:02.700647 containerd[1509]: time="2025-02-13T15:43:02.700587311Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:43:02.700647 containerd[1509]: time="2025-02-13T15:43:02.700599535Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:43:02.703091 containerd[1509]: time="2025-02-13T15:43:02.701762974Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:43:02.703091 containerd[1509]: time="2025-02-13T15:43:02.701874649Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:43:02.703091 containerd[1509]: time="2025-02-13T15:43:02.701927022Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:43:02.703091 containerd[1509]: time="2025-02-13T15:43:02.702581927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:7,}" Feb 13 15:43:02.712365 kubelet[2620]: I0213 15:43:02.710545 2620 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c" Feb 13 15:43:02.712456 containerd[1509]: time="2025-02-13T15:43:02.711013980Z" level=info msg="StopPodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\"" Feb 13 15:43:02.712456 containerd[1509]: time="2025-02-13T15:43:02.711280195Z" level=info msg="Ensure that sandbox 9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c in task-service has been cleanup successfully" Feb 13 15:43:02.712456 containerd[1509]: time="2025-02-13T15:43:02.711769660Z" level=info msg="TearDown network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" successfully" Feb 13 15:43:02.712456 containerd[1509]: time="2025-02-13T15:43:02.711785421Z" level=info msg="StopPodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" returns successfully" Feb 13 15:43:02.713661 systemd[1]: run-netns-cni\x2deac4ab58\x2d9793\x2d5b2e\x2d383d\x2d748b0c2a640f.mount: Deactivated successfully. 
Feb 13 15:43:02.714481 containerd[1509]: time="2025-02-13T15:43:02.714448199Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" Feb 13 15:43:02.714632 containerd[1509]: time="2025-02-13T15:43:02.714550397Z" level=info msg="TearDown network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" successfully" Feb 13 15:43:02.714632 containerd[1509]: time="2025-02-13T15:43:02.714571808Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" returns successfully" Feb 13 15:43:02.716220 containerd[1509]: time="2025-02-13T15:43:02.716173635Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:43:02.716289 containerd[1509]: time="2025-02-13T15:43:02.716260042Z" level=info msg="TearDown network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" successfully" Feb 13 15:43:02.716289 containerd[1509]: time="2025-02-13T15:43:02.716281634Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" returns successfully" Feb 13 15:43:02.718979 containerd[1509]: time="2025-02-13T15:43:02.718876852Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:43:02.719078 containerd[1509]: time="2025-02-13T15:43:02.719047622Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully" Feb 13 15:43:02.719078 containerd[1509]: time="2025-02-13T15:43:02.719066198Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully" Feb 13 15:43:02.730116 containerd[1509]: time="2025-02-13T15:43:02.730061896Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:43:02.730243 
containerd[1509]: time="2025-02-13T15:43:02.730179764Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:43:02.730243 containerd[1509]: time="2025-02-13T15:43:02.730193491Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:43:02.730550 kubelet[2620]: E0213 15:43:02.730523 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:02.730857 containerd[1509]: time="2025-02-13T15:43:02.730804563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:5,}" Feb 13 15:43:02.731471 kubelet[2620]: I0213 15:43:02.731446 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847" Feb 13 15:43:02.732003 containerd[1509]: time="2025-02-13T15:43:02.731973011Z" level=info msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" Feb 13 15:43:02.732211 containerd[1509]: time="2025-02-13T15:43:02.732185011Z" level=info msg="Ensure that sandbox 037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847 in task-service has been cleanup successfully" Feb 13 15:43:02.732801 containerd[1509]: time="2025-02-13T15:43:02.732643177Z" level=info msg="TearDown network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" successfully" Feb 13 15:43:02.732801 containerd[1509]: time="2025-02-13T15:43:02.732682143Z" level=info msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" returns successfully" Feb 13 15:43:02.733038 containerd[1509]: time="2025-02-13T15:43:02.733006890Z" level=info 
msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:43:02.733152 containerd[1509]: time="2025-02-13T15:43:02.733109037Z" level=info msg="TearDown network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" successfully" Feb 13 15:43:02.733152 containerd[1509]: time="2025-02-13T15:43:02.733120119Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" returns successfully" Feb 13 15:43:02.733661 containerd[1509]: time="2025-02-13T15:43:02.733525453Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:43:02.733661 containerd[1509]: time="2025-02-13T15:43:02.733642700Z" level=info msg="TearDown network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:43:02.733661 containerd[1509]: time="2025-02-13T15:43:02.733658010Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:43:02.734018 containerd[1509]: time="2025-02-13T15:43:02.733989731Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:43:02.734081 containerd[1509]: time="2025-02-13T15:43:02.734069185Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:43:02.734121 containerd[1509]: time="2025-02-13T15:43:02.734080786Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:43:02.734466 containerd[1509]: time="2025-02-13T15:43:02.734393031Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:43:02.734510 containerd[1509]: time="2025-02-13T15:43:02.734469929Z" level=info msg="TearDown network for 
sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:43:02.734510 containerd[1509]: time="2025-02-13T15:43:02.734478987Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:43:02.735062 containerd[1509]: time="2025-02-13T15:43:02.734949165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:5,}" Feb 13 15:43:02.735656 kubelet[2620]: I0213 15:43:02.735631 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd" Feb 13 15:43:02.736079 containerd[1509]: time="2025-02-13T15:43:02.736049272Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" Feb 13 15:43:03.455855 systemd[1]: run-netns-cni\x2d9a941460\x2d68ba\x2d5a96\x2d2484\x2d7d43d2b81cc7.mount: Deactivated successfully. Feb 13 15:43:03.670259 containerd[1509]: time="2025-02-13T15:43:03.670182487Z" level=info msg="Ensure that sandbox 512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd in task-service has been cleanup successfully" Feb 13 15:43:03.676235 systemd[1]: run-netns-cni\x2dc202646f\x2df63c\x2df437\x2d0041\x2d492c30cc1c5b.mount: Deactivated successfully. 
Feb 13 15:43:03.692387 containerd[1509]: time="2025-02-13T15:43:03.691757456Z" level=info msg="TearDown network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" successfully" Feb 13 15:43:03.692387 containerd[1509]: time="2025-02-13T15:43:03.691799927Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" returns successfully" Feb 13 15:43:03.694461 containerd[1509]: time="2025-02-13T15:43:03.692867491Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:43:03.700791 containerd[1509]: time="2025-02-13T15:43:03.700713225Z" level=info msg="TearDown network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" successfully" Feb 13 15:43:03.700791 containerd[1509]: time="2025-02-13T15:43:03.700780575Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" returns successfully" Feb 13 15:43:03.701389 containerd[1509]: time="2025-02-13T15:43:03.701184676Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:43:03.701389 containerd[1509]: time="2025-02-13T15:43:03.701276804Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 15:43:03.701389 containerd[1509]: time="2025-02-13T15:43:03.701305750Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns successfully" Feb 13 15:43:03.702058 containerd[1509]: time="2025-02-13T15:43:03.701744027Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:43:03.702058 containerd[1509]: time="2025-02-13T15:43:03.701881623Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 
13 15:43:03.702058 containerd[1509]: time="2025-02-13T15:43:03.701919326Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully" Feb 13 15:43:03.702235 containerd[1509]: time="2025-02-13T15:43:03.702199998Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:43:03.702340 containerd[1509]: time="2025-02-13T15:43:03.702303448Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:43:03.702378 containerd[1509]: time="2025-02-13T15:43:03.702340339Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:43:03.702537 kubelet[2620]: E0213 15:43:03.702515 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:03.703592 containerd[1509]: time="2025-02-13T15:43:03.703390289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:5,}" Feb 13 15:43:04.250940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059183755.mount: Deactivated successfully. 
Feb 13 15:43:04.610754 containerd[1509]: time="2025-02-13T15:43:04.610644723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:04.612984 containerd[1509]: time="2025-02-13T15:43:04.612945067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:43:04.615570 containerd[1509]: time="2025-02-13T15:43:04.615534190Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:04.625905 containerd[1509]: time="2025-02-13T15:43:04.625843760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:04.626881 containerd[1509]: time="2025-02-13T15:43:04.626675837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.052628986s" Feb 13 15:43:04.627047 containerd[1509]: time="2025-02-13T15:43:04.627007227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:43:04.642097 containerd[1509]: time="2025-02-13T15:43:04.642054251Z" level=info msg="CreateContainer within sandbox \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:43:04.681335 containerd[1509]: time="2025-02-13T15:43:04.679828896Z" level=info 
msg="CreateContainer within sandbox \"95e1d07ec69dd0bc8d2fd5c26df704f2390a5a07329e889609f55773e780054e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7ca5a17eccfdaf140f12042e62c6bbe4f153c4c85b8366175c436ec429332e18\"" Feb 13 15:43:04.683779 containerd[1509]: time="2025-02-13T15:43:04.683578390Z" level=info msg="StartContainer for \"7ca5a17eccfdaf140f12042e62c6bbe4f153c4c85b8366175c436ec429332e18\"" Feb 13 15:43:04.704850 containerd[1509]: time="2025-02-13T15:43:04.704792689Z" level=error msg="Failed to destroy network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.705776 containerd[1509]: time="2025-02-13T15:43:04.705744738Z" level=error msg="encountered an error cleaning up failed sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.706742 containerd[1509]: time="2025-02-13T15:43:04.706716967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.707186 kubelet[2620]: E0213 15:43:04.707151 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.707591 kubelet[2620]: E0213 15:43:04.707571 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:43:04.707662 kubelet[2620]: E0213 15:43:04.707646 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:43:04.707781 kubelet[2620]: E0213 15:43:04.707746 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e" Feb 13 15:43:04.714181 containerd[1509]: time="2025-02-13T15:43:04.714128407Z" level=error msg="Failed to destroy network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.714788 containerd[1509]: time="2025-02-13T15:43:04.714751421Z" level=error msg="encountered an error cleaning up failed sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.714927 containerd[1509]: time="2025-02-13T15:43:04.714897162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.715299 kubelet[2620]: E0213 15:43:04.715244 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.715374 kubelet[2620]: 
E0213 15:43:04.715346 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:43:04.715409 kubelet[2620]: E0213 15:43:04.715373 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:43:04.715459 kubelet[2620]: E0213 15:43:04.715422 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611" Feb 13 15:43:04.735369 containerd[1509]: time="2025-02-13T15:43:04.735255186Z" level=error msg="Failed to destroy network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.736953 containerd[1509]: time="2025-02-13T15:43:04.736930153Z" level=error msg="encountered an error cleaning up failed sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.737085 containerd[1509]: time="2025-02-13T15:43:04.737066256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.737562 kubelet[2620]: E0213 15:43:04.737512 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.737624 kubelet[2620]: E0213 15:43:04.737582 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:43:04.737624 kubelet[2620]: E0213 15:43:04.737604 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:43:04.737676 kubelet[2620]: E0213 15:43:04.737647 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346" Feb 13 15:43:04.739794 containerd[1509]: time="2025-02-13T15:43:04.739661961Z" level=error msg="Failed to destroy network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.740228 containerd[1509]: time="2025-02-13T15:43:04.740202926Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.740331 containerd[1509]: time="2025-02-13T15:43:04.740298872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.740663 kubelet[2620]: E0213 15:43:04.740632 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.740704 kubelet[2620]: E0213 15:43:04.740678 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:43:04.740742 kubelet[2620]: E0213 15:43:04.740702 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh42t" Feb 13 15:43:04.740773 kubelet[2620]: E0213 15:43:04.740752 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh42t_kube-system(ba5086be-6515-43a3-aac4-e336b6f0df11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh42t" podUID="ba5086be-6515-43a3-aac4-e336b6f0df11" Feb 13 15:43:04.746114 kubelet[2620]: I0213 15:43:04.745928 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8" Feb 13 15:43:04.746291 containerd[1509]: time="2025-02-13T15:43:04.746246944Z" level=error msg="Failed to destroy network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.746913 containerd[1509]: time="2025-02-13T15:43:04.746815422Z" level=error msg="encountered an error cleaning up failed sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.746913 containerd[1509]: time="2025-02-13T15:43:04.746899836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.747348 kubelet[2620]: E0213 15:43:04.747288 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.747501 kubelet[2620]: E0213 15:43:04.747368 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:43:04.747501 kubelet[2620]: E0213 15:43:04.747394 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" Feb 13 15:43:04.747564 kubelet[2620]: E0213 15:43:04.747479 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-nvw2j_calico-apiserver(41ec69cc-3a06-44c4-8295-0752449d5e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podUID="41ec69cc-3a06-44c4-8295-0752449d5e76" Feb 13 15:43:04.748563 containerd[1509]: time="2025-02-13T15:43:04.748349577Z" level=info msg="StopPodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\"" Feb 13 15:43:04.748828 containerd[1509]: time="2025-02-13T15:43:04.748802642Z" level=info msg="Ensure that sandbox eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8 in task-service has been cleanup successfully" Feb 13 15:43:04.749053 containerd[1509]: time="2025-02-13T15:43:04.749037666Z" level=info msg="TearDown network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" successfully" Feb 13 15:43:04.749096 containerd[1509]: time="2025-02-13T15:43:04.749051633Z" level=info msg="StopPodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" returns successfully" Feb 13 15:43:04.750180 containerd[1509]: time="2025-02-13T15:43:04.750157339Z" level=error msg="Failed to destroy network for sandbox 
\"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.750380 containerd[1509]: time="2025-02-13T15:43:04.750208278Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" Feb 13 15:43:04.750576 containerd[1509]: time="2025-02-13T15:43:04.750508367Z" level=info msg="TearDown network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" successfully" Feb 13 15:43:04.750576 containerd[1509]: time="2025-02-13T15:43:04.750551872Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" returns successfully" Feb 13 15:43:04.750752 containerd[1509]: time="2025-02-13T15:43:04.750664419Z" level=error msg="encountered an error cleaning up failed sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.750752 containerd[1509]: time="2025-02-13T15:43:04.750704476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.751035 kubelet[2620]: E0213 15:43:04.750996 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.751159 kubelet[2620]: E0213 15:43:04.751042 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:43:04.751159 kubelet[2620]: E0213 15:43:04.751063 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtt2x" Feb 13 15:43:04.751159 kubelet[2620]: E0213 15:43:04.751101 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtt2x_calico-system(fb72e478-27f9-4b96-ac63-312fc0de0c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-wtt2x" podUID="fb72e478-27f9-4b96-ac63-312fc0de0c3b" Feb 13 15:43:04.751286 containerd[1509]: time="2025-02-13T15:43:04.751254319Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:43:04.751547 containerd[1509]: time="2025-02-13T15:43:04.751451690Z" level=info msg="TearDown network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" successfully" Feb 13 15:43:04.751547 containerd[1509]: time="2025-02-13T15:43:04.751479464Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" returns successfully" Feb 13 15:43:04.752103 containerd[1509]: time="2025-02-13T15:43:04.751938451Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:43:04.752103 containerd[1509]: time="2025-02-13T15:43:04.752040047Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 15:43:04.752103 containerd[1509]: time="2025-02-13T15:43:04.752054195Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns successfully" Feb 13 15:43:04.752567 containerd[1509]: time="2025-02-13T15:43:04.752546736Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:43:04.752794 kubelet[2620]: I0213 15:43:04.752629 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f" Feb 13 15:43:04.752855 containerd[1509]: time="2025-02-13T15:43:04.752717536Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 13 15:43:04.752855 containerd[1509]: time="2025-02-13T15:43:04.752733187Z" level=info msg="StopPodSandbox for 
\"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully" Feb 13 15:43:04.753165 containerd[1509]: time="2025-02-13T15:43:04.753120725Z" level=info msg="StopPodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\"" Feb 13 15:43:04.753391 containerd[1509]: time="2025-02-13T15:43:04.753367722Z" level=info msg="Ensure that sandbox a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f in task-service has been cleanup successfully" Feb 13 15:43:04.753638 containerd[1509]: time="2025-02-13T15:43:04.753617344Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:43:04.754490 containerd[1509]: time="2025-02-13T15:43:04.753713611Z" level=info msg="TearDown network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" successfully" Feb 13 15:43:04.754568 containerd[1509]: time="2025-02-13T15:43:04.754554045Z" level=info msg="StopPodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" returns successfully" Feb 13 15:43:04.754680 containerd[1509]: time="2025-02-13T15:43:04.754526331Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:43:04.754774 containerd[1509]: time="2025-02-13T15:43:04.754761065Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:43:04.754990 containerd[1509]: time="2025-02-13T15:43:04.754961041Z" level=info msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" Feb 13 15:43:04.755106 containerd[1509]: time="2025-02-13T15:43:04.755084509Z" level=info msg="TearDown network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" successfully" Feb 13 15:43:04.755136 containerd[1509]: time="2025-02-13T15:43:04.755106161Z" level=info 
msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" returns successfully" Feb 13 15:43:04.759336 kubelet[2620]: E0213 15:43:04.755734 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.755933831Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.756028984Z" level=info msg="TearDown network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.756039565Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" returns successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.756593726Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.758435584Z" level=info msg="TearDown network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.758452296Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.758604381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:6,}" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.758775471Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:43:04.759440 
containerd[1509]: time="2025-02-13T15:43:04.758852590Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.758885644Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.759114165Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.759188288Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:43:04.759440 containerd[1509]: time="2025-02-13T15:43:04.759196555Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:43:04.759730 containerd[1509]: time="2025-02-13T15:43:04.759600455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:6,}" Feb 13 15:43:04.785473 systemd[1]: Started cri-containerd-7ca5a17eccfdaf140f12042e62c6bbe4f153c4c85b8366175c436ec429332e18.scope - libcontainer container 7ca5a17eccfdaf140f12042e62c6bbe4f153c4c85b8366175c436ec429332e18. 
Feb 13 15:43:04.819458 kubelet[2620]: I0213 15:43:04.819005 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a" Feb 13 15:43:04.819637 containerd[1509]: time="2025-02-13T15:43:04.819599491Z" level=info msg="StopPodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\"" Feb 13 15:43:04.819890 containerd[1509]: time="2025-02-13T15:43:04.819852169Z" level=info msg="Ensure that sandbox 2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a in task-service has been cleanup successfully" Feb 13 15:43:04.820358 containerd[1509]: time="2025-02-13T15:43:04.820239668Z" level=info msg="TearDown network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" successfully" Feb 13 15:43:04.820358 containerd[1509]: time="2025-02-13T15:43:04.820259586Z" level=info msg="StopPodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" returns successfully" Feb 13 15:43:04.820698 containerd[1509]: time="2025-02-13T15:43:04.820620563Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" Feb 13 15:43:04.820813 containerd[1509]: time="2025-02-13T15:43:04.820719675Z" level=info msg="TearDown network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" successfully" Feb 13 15:43:04.820813 containerd[1509]: time="2025-02-13T15:43:04.820762648Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" returns successfully" Feb 13 15:43:04.821333 containerd[1509]: time="2025-02-13T15:43:04.821091493Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:43:04.821333 containerd[1509]: time="2025-02-13T15:43:04.821254288Z" level=info msg="TearDown network for sandbox 
\"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" successfully" Feb 13 15:43:04.821333 containerd[1509]: time="2025-02-13T15:43:04.821265960Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" returns successfully" Feb 13 15:43:04.821888 containerd[1509]: time="2025-02-13T15:43:04.821809049Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:43:04.822366 containerd[1509]: time="2025-02-13T15:43:04.821909163Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:43:04.822366 containerd[1509]: time="2025-02-13T15:43:04.822047320Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:43:04.823100 containerd[1509]: time="2025-02-13T15:43:04.822890108Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:43:04.823100 containerd[1509]: time="2025-02-13T15:43:04.822969252Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully" Feb 13 15:43:04.823100 containerd[1509]: time="2025-02-13T15:43:04.822978479Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:43:04.825533 containerd[1509]: time="2025-02-13T15:43:04.825502106Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:43:04.825795 containerd[1509]: time="2025-02-13T15:43:04.825707281Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:43:04.825795 containerd[1509]: time="2025-02-13T15:43:04.825721459Z" level=info msg="StopPodSandbox for 
\"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:43:04.827306 containerd[1509]: time="2025-02-13T15:43:04.826865209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:43:04.838934 containerd[1509]: time="2025-02-13T15:43:04.838878400Z" level=info msg="StartContainer for \"7ca5a17eccfdaf140f12042e62c6bbe4f153c4c85b8366175c436ec429332e18\" returns successfully" Feb 13 15:43:04.966023 containerd[1509]: time="2025-02-13T15:43:04.965953706Z" level=error msg="Failed to destroy network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.966452 containerd[1509]: time="2025-02-13T15:43:04.966414636Z" level=error msg="encountered an error cleaning up failed sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.966560 containerd[1509]: time="2025-02-13T15:43:04.966484040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.967114 kubelet[2620]: E0213 
15:43:04.966766 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.967114 kubelet[2620]: E0213 15:43:04.966836 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:43:04.967114 kubelet[2620]: E0213 15:43:04.966858 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" Feb 13 15:43:04.967262 kubelet[2620]: E0213 15:43:04.966905 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677779d7f9-hvmnx_calico-system(75e9cbc2-59c3-4f8f-a732-f7aed42e478e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podUID="75e9cbc2-59c3-4f8f-a732-f7aed42e478e" Feb 13 15:43:04.967646 containerd[1509]: time="2025-02-13T15:43:04.967612211Z" level=error msg="Failed to destroy network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.968085 containerd[1509]: time="2025-02-13T15:43:04.968063022Z" level=error msg="encountered an error cleaning up failed sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.968198 containerd[1509]: time="2025-02-13T15:43:04.968179526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.968490 kubelet[2620]: E0213 15:43:04.968387 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.968490 kubelet[2620]: E0213 15:43:04.968415 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:43:04.968490 kubelet[2620]: E0213 15:43:04.968429 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lp99" Feb 13 15:43:04.968591 kubelet[2620]: E0213 15:43:04.968457 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lp99_kube-system(5fc4b308-ecdc-4011-b8c8-b82b8a22d611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lp99" podUID="5fc4b308-ecdc-4011-b8c8-b82b8a22d611" Feb 13 15:43:04.973148 containerd[1509]: time="2025-02-13T15:43:04.973097008Z" level=error msg="Failed to destroy 
network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.974670 containerd[1509]: time="2025-02-13T15:43:04.973654315Z" level=error msg="encountered an error cleaning up failed sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.974670 containerd[1509]: time="2025-02-13T15:43:04.973769998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.974890 kubelet[2620]: E0213 15:43:04.974449 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:43:04.974890 kubelet[2620]: E0213 15:43:04.974519 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:43:04.974890 kubelet[2620]: E0213 15:43:04.974541 2620 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" Feb 13 15:43:04.975020 kubelet[2620]: E0213 15:43:04.974592 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57b5c7b97f-ldpr5_calico-apiserver(34518299-f628-42e5-806b-51d2fa3ce346)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podUID="34518299-f628-42e5-806b-51d2fa3ce346" Feb 13 15:43:05.011456 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:43:05.012296 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:43:05.544504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81-shm.mount: Deactivated successfully. 
Feb 13 15:43:05.544648 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476-shm.mount: Deactivated successfully. Feb 13 15:43:05.544760 systemd[1]: run-netns-cni\x2d30d1d9fa\x2d9649\x2dda34\x2d10a3\x2d96177e8f15b3.mount: Deactivated successfully. Feb 13 15:43:05.544901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a-shm.mount: Deactivated successfully. Feb 13 15:43:05.545006 systemd[1]: run-netns-cni\x2db3417527\x2d4e4d\x2d3023\x2d9134\x2d4cf492ddbcfd.mount: Deactivated successfully. Feb 13 15:43:05.545098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f-shm.mount: Deactivated successfully. Feb 13 15:43:05.824693 kubelet[2620]: E0213 15:43:05.824264 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:05.828269 kubelet[2620]: I0213 15:43:05.827873 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6" Feb 13 15:43:05.828562 containerd[1509]: time="2025-02-13T15:43:05.828519495Z" level=info msg="StopPodSandbox for \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\"" Feb 13 15:43:05.829056 containerd[1509]: time="2025-02-13T15:43:05.828732707Z" level=info msg="Ensure that sandbox faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6 in task-service has been cleanup successfully" Feb 13 15:43:05.829573 containerd[1509]: time="2025-02-13T15:43:05.829520818Z" level=info msg="TearDown network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\" successfully" Feb 13 15:43:05.832363 containerd[1509]: time="2025-02-13T15:43:05.829736937Z" level=info 
msg="StopPodSandbox for \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\" returns successfully" Feb 13 15:43:05.832570 containerd[1509]: time="2025-02-13T15:43:05.832536223Z" level=info msg="StopPodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\"" Feb 13 15:43:05.832649 containerd[1509]: time="2025-02-13T15:43:05.832627229Z" level=info msg="TearDown network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" successfully" Feb 13 15:43:05.832649 containerd[1509]: time="2025-02-13T15:43:05.832642368Z" level=info msg="StopPodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" returns successfully" Feb 13 15:43:05.833039 containerd[1509]: time="2025-02-13T15:43:05.833005590Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" Feb 13 15:43:05.833138 containerd[1509]: time="2025-02-13T15:43:05.833082027Z" level=info msg="TearDown network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" successfully" Feb 13 15:43:05.833138 containerd[1509]: time="2025-02-13T15:43:05.833132465Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" returns successfully" Feb 13 15:43:05.833519 containerd[1509]: time="2025-02-13T15:43:05.833489023Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:43:05.833908 containerd[1509]: time="2025-02-13T15:43:05.833581201Z" level=info msg="TearDown network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" successfully" Feb 13 15:43:05.833908 containerd[1509]: time="2025-02-13T15:43:05.833592814Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" returns successfully" Feb 13 15:43:05.834044 containerd[1509]: time="2025-02-13T15:43:05.834019348Z" level=info 
msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:43:05.834350 containerd[1509]: time="2025-02-13T15:43:05.834288218Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:43:05.834350 containerd[1509]: time="2025-02-13T15:43:05.834335448Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:43:05.834617 systemd[1]: run-netns-cni\x2d480389d9\x2d40a2\x2dca5d\x2d9923\x2d16fd1ac07b58.mount: Deactivated successfully. Feb 13 15:43:05.835954 containerd[1509]: time="2025-02-13T15:43:05.835759780Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:43:05.835954 containerd[1509]: time="2025-02-13T15:43:05.835840956Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully" Feb 13 15:43:05.835954 containerd[1509]: time="2025-02-13T15:43:05.835852038Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:43:05.836608 containerd[1509]: time="2025-02-13T15:43:05.836563842Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:43:05.836699 kubelet[2620]: I0213 15:43:05.836671 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476" Feb 13 15:43:05.836745 containerd[1509]: time="2025-02-13T15:43:05.836722379Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:43:05.836745 containerd[1509]: time="2025-02-13T15:43:05.836737238Z" level=info msg="StopPodSandbox for 
\"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:43:05.837303 containerd[1509]: time="2025-02-13T15:43:05.837268213Z" level=info msg="StopPodSandbox for \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\"" Feb 13 15:43:05.837753 containerd[1509]: time="2025-02-13T15:43:05.837718944Z" level=info msg="Ensure that sandbox 18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476 in task-service has been cleanup successfully" Feb 13 15:43:05.838843 containerd[1509]: time="2025-02-13T15:43:05.838466748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:7,}" Feb 13 15:43:05.839067 containerd[1509]: time="2025-02-13T15:43:05.839046448Z" level=info msg="TearDown network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\" successfully" Feb 13 15:43:05.839134 containerd[1509]: time="2025-02-13T15:43:05.839113738Z" level=info msg="StopPodSandbox for \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\" returns successfully" Feb 13 15:43:05.840541 containerd[1509]: time="2025-02-13T15:43:05.840507510Z" level=info msg="StopPodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\"" Feb 13 15:43:05.840810 containerd[1509]: time="2025-02-13T15:43:05.840720301Z" level=info msg="TearDown network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" successfully" Feb 13 15:43:05.840810 containerd[1509]: time="2025-02-13T15:43:05.840734699Z" level=info msg="StopPodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" returns successfully" Feb 13 15:43:05.840964 systemd[1]: run-netns-cni\x2d6183e1eb\x2de972\x2d442d\x2dd45e\x2d24cec0c1f0ff.mount: Deactivated successfully. 
Feb 13 15:43:05.841805 containerd[1509]: time="2025-02-13T15:43:05.841525006Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" Feb 13 15:43:05.841805 containerd[1509]: time="2025-02-13T15:43:05.841622975Z" level=info msg="TearDown network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" successfully" Feb 13 15:43:05.841805 containerd[1509]: time="2025-02-13T15:43:05.841633225Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" returns successfully" Feb 13 15:43:05.842718 containerd[1509]: time="2025-02-13T15:43:05.842695296Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:43:05.843011 containerd[1509]: time="2025-02-13T15:43:05.842963284Z" level=info msg="TearDown network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" successfully" Feb 13 15:43:05.843373 containerd[1509]: time="2025-02-13T15:43:05.843199439Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" returns successfully" Feb 13 15:43:05.843809 containerd[1509]: time="2025-02-13T15:43:05.843775442Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:43:05.844290 kubelet[2620]: I0213 15:43:05.843889 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa" Feb 13 15:43:05.844397 containerd[1509]: time="2025-02-13T15:43:05.843962915Z" level=info msg="TearDown network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:43:05.844397 containerd[1509]: time="2025-02-13T15:43:05.843977303Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns 
successfully" Feb 13 15:43:05.844397 containerd[1509]: time="2025-02-13T15:43:05.844294926Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:43:05.844397 containerd[1509]: time="2025-02-13T15:43:05.844394298Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:43:05.844559 containerd[1509]: time="2025-02-13T15:43:05.844408235Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:43:05.844651 containerd[1509]: time="2025-02-13T15:43:05.844622849Z" level=info msg="StopPodSandbox for \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\"" Feb 13 15:43:05.844880 containerd[1509]: time="2025-02-13T15:43:05.844850239Z" level=info msg="Ensure that sandbox 58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa in task-service has been cleanup successfully" Feb 13 15:43:05.845130 containerd[1509]: time="2025-02-13T15:43:05.845091425Z" level=info msg="TearDown network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\" successfully" Feb 13 15:43:05.845130 containerd[1509]: time="2025-02-13T15:43:05.845112375Z" level=info msg="StopPodSandbox for \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\" returns successfully" Feb 13 15:43:05.845286 containerd[1509]: time="2025-02-13T15:43:05.845260701Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:43:05.845760 containerd[1509]: time="2025-02-13T15:43:05.845357769Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:43:05.845760 containerd[1509]: time="2025-02-13T15:43:05.845370464Z" level=info msg="StopPodSandbox for 
\"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:43:05.846235 containerd[1509]: time="2025-02-13T15:43:05.846205256Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:43:05.846305 containerd[1509]: time="2025-02-13T15:43:05.846289980Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:43:05.846354 containerd[1509]: time="2025-02-13T15:43:05.846301732Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:43:05.846395 containerd[1509]: time="2025-02-13T15:43:05.846353702Z" level=info msg="StopPodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\"" Feb 13 15:43:05.846458 containerd[1509]: time="2025-02-13T15:43:05.846434018Z" level=info msg="TearDown network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" successfully" Feb 13 15:43:05.846458 containerd[1509]: time="2025-02-13T15:43:05.846452764Z" level=info msg="StopPodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" returns successfully" Feb 13 15:43:05.847042 containerd[1509]: time="2025-02-13T15:43:05.847017836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:8,}" Feb 13 15:43:05.847397 containerd[1509]: time="2025-02-13T15:43:05.847210528Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" Feb 13 15:43:05.847397 containerd[1509]: time="2025-02-13T15:43:05.847296885Z" level=info msg="TearDown network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" successfully" Feb 13 15:43:05.847397 containerd[1509]: time="2025-02-13T15:43:05.847309999Z" 
level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" returns successfully" Feb 13 15:43:05.848861 systemd[1]: run-netns-cni\x2d1a41b84b\x2d809b\x2daea8\x2d159d\x2d32f224b2080a.mount: Deactivated successfully. Feb 13 15:43:05.851206 containerd[1509]: time="2025-02-13T15:43:05.851134106Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:43:05.851445 containerd[1509]: time="2025-02-13T15:43:05.851419537Z" level=info msg="TearDown network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" successfully" Feb 13 15:43:05.851445 containerd[1509]: time="2025-02-13T15:43:05.851442102Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" returns successfully" Feb 13 15:43:05.851997 containerd[1509]: time="2025-02-13T15:43:05.851889195Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:43:05.852150 containerd[1509]: time="2025-02-13T15:43:05.851998015Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully" Feb 13 15:43:05.852150 containerd[1509]: time="2025-02-13T15:43:05.852011591Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully" Feb 13 15:43:05.853484 containerd[1509]: time="2025-02-13T15:43:05.852665534Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:43:05.853484 containerd[1509]: time="2025-02-13T15:43:05.852768103Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:43:05.853484 containerd[1509]: time="2025-02-13T15:43:05.852783462Z" level=info msg="StopPodSandbox for 
\"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:43:05.853593 kubelet[2620]: I0213 15:43:05.853463 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81" Feb 13 15:43:05.853868 containerd[1509]: time="2025-02-13T15:43:05.853826076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:43:05.854387 containerd[1509]: time="2025-02-13T15:43:05.853996264Z" level=info msg="StopPodSandbox for \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\"" Feb 13 15:43:05.854387 containerd[1509]: time="2025-02-13T15:43:05.854276586Z" level=info msg="Ensure that sandbox a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81 in task-service has been cleanup successfully" Feb 13 15:43:05.857276 systemd[1]: run-netns-cni\x2d9d673541\x2dfed7\x2d2665\x2da483\x2d2c227e08b78c.mount: Deactivated successfully. 
Feb 13 15:43:05.857641 containerd[1509]: time="2025-02-13T15:43:05.857607250Z" level=info msg="TearDown network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\" successfully" Feb 13 15:43:05.857641 containerd[1509]: time="2025-02-13T15:43:05.857631807Z" level=info msg="StopPodSandbox for \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\" returns successfully" Feb 13 15:43:05.858221 containerd[1509]: time="2025-02-13T15:43:05.858157813Z" level=info msg="StopPodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\"" Feb 13 15:43:05.858286 containerd[1509]: time="2025-02-13T15:43:05.858268947Z" level=info msg="TearDown network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" successfully" Feb 13 15:43:05.858342 containerd[1509]: time="2025-02-13T15:43:05.858282273Z" level=info msg="StopPodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" returns successfully" Feb 13 15:43:05.858702 containerd[1509]: time="2025-02-13T15:43:05.858680603Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" Feb 13 15:43:05.858977 containerd[1509]: time="2025-02-13T15:43:05.858776718Z" level=info msg="TearDown network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" successfully" Feb 13 15:43:05.858977 containerd[1509]: time="2025-02-13T15:43:05.858795384Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" returns successfully" Feb 13 15:43:05.859389 containerd[1509]: time="2025-02-13T15:43:05.859230976Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:43:05.859891 containerd[1509]: time="2025-02-13T15:43:05.859817679Z" level=info msg="TearDown network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" successfully" Feb 
13 15:43:05.859891 containerd[1509]: time="2025-02-13T15:43:05.859835734Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" returns successfully" Feb 13 15:43:05.860220 containerd[1509]: time="2025-02-13T15:43:05.860194578Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:43:05.860337 containerd[1509]: time="2025-02-13T15:43:05.860292376Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully" Feb 13 15:43:05.860433 containerd[1509]: time="2025-02-13T15:43:05.860310371Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully" Feb 13 15:43:05.860792 containerd[1509]: time="2025-02-13T15:43:05.860771531Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:43:05.860912 containerd[1509]: time="2025-02-13T15:43:05.860872757Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:43:05.860912 containerd[1509]: time="2025-02-13T15:43:05.860890872Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:43:05.861350 kubelet[2620]: E0213 15:43:05.861077 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:05.861536 containerd[1509]: time="2025-02-13T15:43:05.861510970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:6,}" Feb 13 15:43:05.862559 kubelet[2620]: I0213 15:43:05.862539 2620 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566" Feb 13 15:43:05.863359 containerd[1509]: time="2025-02-13T15:43:05.863240841Z" level=info msg="StopPodSandbox for \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\"" Feb 13 15:43:05.863643 containerd[1509]: time="2025-02-13T15:43:05.863593232Z" level=info msg="Ensure that sandbox 35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566 in task-service has been cleanup successfully" Feb 13 15:43:05.863838 containerd[1509]: time="2025-02-13T15:43:05.863813438Z" level=info msg="TearDown network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\" successfully" Feb 13 15:43:05.863838 containerd[1509]: time="2025-02-13T15:43:05.863834268Z" level=info msg="StopPodSandbox for \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\" returns successfully" Feb 13 15:43:05.864110 containerd[1509]: time="2025-02-13T15:43:05.864070744Z" level=info msg="StopPodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\"" Feb 13 15:43:05.864186 containerd[1509]: time="2025-02-13T15:43:05.864166670Z" level=info msg="TearDown network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" successfully" Feb 13 15:43:05.864186 containerd[1509]: time="2025-02-13T15:43:05.864181848Z" level=info msg="StopPodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" returns successfully" Feb 13 15:43:05.864748 containerd[1509]: time="2025-02-13T15:43:05.864635835Z" level=info msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" Feb 13 15:43:05.864748 containerd[1509]: time="2025-02-13T15:43:05.864728795Z" level=info msg="TearDown network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" successfully" Feb 13 15:43:05.864748 containerd[1509]: time="2025-02-13T15:43:05.864740938Z" level=info 
msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" returns successfully" Feb 13 15:43:05.865197 containerd[1509]: time="2025-02-13T15:43:05.865175829Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:43:05.865283 containerd[1509]: time="2025-02-13T15:43:05.865268037Z" level=info msg="TearDown network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" successfully" Feb 13 15:43:05.865335 containerd[1509]: time="2025-02-13T15:43:05.865282425Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" returns successfully" Feb 13 15:43:05.865717 containerd[1509]: time="2025-02-13T15:43:05.865697326Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:43:05.865794 containerd[1509]: time="2025-02-13T15:43:05.865779014Z" level=info msg="TearDown network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:43:05.865844 containerd[1509]: time="2025-02-13T15:43:05.865792220Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:43:05.866438 containerd[1509]: time="2025-02-13T15:43:05.866285112Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:43:05.866543 containerd[1509]: time="2025-02-13T15:43:05.866463196Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:43:05.866543 containerd[1509]: time="2025-02-13T15:43:05.866481060Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:43:05.866723 containerd[1509]: time="2025-02-13T15:43:05.866701205Z" level=info 
msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:43:05.866810 containerd[1509]: time="2025-02-13T15:43:05.866791689Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:43:05.866856 containerd[1509]: time="2025-02-13T15:43:05.866807560Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:43:05.867385 containerd[1509]: time="2025-02-13T15:43:05.867362102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:7,}" Feb 13 15:43:05.867639 kubelet[2620]: I0213 15:43:05.867586 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a" Feb 13 15:43:05.868024 containerd[1509]: time="2025-02-13T15:43:05.867987129Z" level=info msg="StopPodSandbox for \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\"" Feb 13 15:43:05.868189 containerd[1509]: time="2025-02-13T15:43:05.868165413Z" level=info msg="Ensure that sandbox edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a in task-service has been cleanup successfully" Feb 13 15:43:05.868432 containerd[1509]: time="2025-02-13T15:43:05.868408783Z" level=info msg="TearDown network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\" successfully" Feb 13 15:43:05.868432 containerd[1509]: time="2025-02-13T15:43:05.868427580Z" level=info msg="StopPodSandbox for \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\" returns successfully" Feb 13 15:43:05.868808 containerd[1509]: time="2025-02-13T15:43:05.868778217Z" level=info msg="StopPodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\"" Feb 13 
15:43:05.868941 containerd[1509]: time="2025-02-13T15:43:05.868917085Z" level=info msg="TearDown network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" successfully" Feb 13 15:43:05.868941 containerd[1509]: time="2025-02-13T15:43:05.868938416Z" level=info msg="StopPodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" returns successfully" Feb 13 15:43:05.869214 containerd[1509]: time="2025-02-13T15:43:05.869188980Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" Feb 13 15:43:05.869332 containerd[1509]: time="2025-02-13T15:43:05.869293502Z" level=info msg="TearDown network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" successfully" Feb 13 15:43:05.869380 containerd[1509]: time="2025-02-13T15:43:05.869328660Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" returns successfully" Feb 13 15:43:05.869657 containerd[1509]: time="2025-02-13T15:43:05.869626636Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:43:05.869757 containerd[1509]: time="2025-02-13T15:43:05.869730366Z" level=info msg="TearDown network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" successfully" Feb 13 15:43:05.869757 containerd[1509]: time="2025-02-13T15:43:05.869751196Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" returns successfully" Feb 13 15:43:05.870055 containerd[1509]: time="2025-02-13T15:43:05.870031007Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:43:05.870148 containerd[1509]: time="2025-02-13T15:43:05.870127573Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 
15:43:05.870148 containerd[1509]: time="2025-02-13T15:43:05.870145307Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns successfully" Feb 13 15:43:05.870748 containerd[1509]: time="2025-02-13T15:43:05.870529730Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:43:05.870748 containerd[1509]: time="2025-02-13T15:43:05.870689248Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 13 15:43:05.870748 containerd[1509]: time="2025-02-13T15:43:05.870704287Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully" Feb 13 15:43:05.871131 containerd[1509]: time="2025-02-13T15:43:05.871085474Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:43:05.871215 containerd[1509]: time="2025-02-13T15:43:05.871186949Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:43:05.871424 containerd[1509]: time="2025-02-13T15:43:05.871382247Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:43:05.871614 kubelet[2620]: E0213 15:43:05.871588 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:05.871862 containerd[1509]: time="2025-02-13T15:43:05.871838448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:7,}" Feb 13 15:43:06.316040 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:59262.service - OpenSSH per-connection server daemon 
(10.0.0.1:59262). Feb 13 15:43:06.405666 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 59262 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:06.407368 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:06.412327 systemd-logind[1495]: New session 12 of user core. Feb 13 15:43:06.423430 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:43:06.534609 systemd[1]: run-netns-cni\x2d529b8f90\x2d53b6\x2dd405\x2d9f5f\x2d485757359c47.mount: Deactivated successfully. Feb 13 15:43:06.534721 systemd[1]: run-netns-cni\x2d107336d6\x2de1b5\x2d1cbf\x2d36a5\x2d2271a61d0c28.mount: Deactivated successfully. Feb 13 15:43:06.610391 sshd[5136]: Connection closed by 10.0.0.1 port 59262 Feb 13 15:43:06.611951 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:06.625942 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:59262.service: Deactivated successfully. Feb 13 15:43:06.631835 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:43:06.647173 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:43:06.654727 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:59268.service - OpenSSH per-connection server daemon (10.0.0.1:59268). Feb 13 15:43:06.655896 systemd-logind[1495]: Removed session 12. Feb 13 15:43:06.696717 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 59268 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:06.697885 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:06.704387 systemd-logind[1495]: New session 13 of user core. Feb 13 15:43:06.709773 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 15:43:06.820435 kernel: bpftool[5282]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:43:06.869546 kubelet[2620]: E0213 15:43:06.869387 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:07.067515 systemd-networkd[1431]: vxlan.calico: Link UP Feb 13 15:43:07.067623 systemd-networkd[1431]: vxlan.calico: Gained carrier Feb 13 15:43:07.120540 sshd[5254]: Connection closed by 10.0.0.1 port 59268 Feb 13 15:43:07.122750 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:07.132098 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:59268.service: Deactivated successfully. Feb 13 15:43:07.134381 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:43:07.135555 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:43:07.147479 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:59276.service - OpenSSH per-connection server daemon (10.0.0.1:59276). Feb 13 15:43:07.149242 systemd-logind[1495]: Removed session 13. Feb 13 15:43:07.209619 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 59276 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:07.212452 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:07.218417 systemd-logind[1495]: New session 14 of user core. Feb 13 15:43:07.226615 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:43:07.400587 sshd[5377]: Connection closed by 10.0.0.1 port 59276 Feb 13 15:43:07.402015 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:07.408281 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:59276.service: Deactivated successfully. Feb 13 15:43:07.412419 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 15:43:07.419133 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:43:07.421212 systemd-logind[1495]: Removed session 14. Feb 13 15:43:07.460952 systemd-networkd[1431]: calib593fcef869: Link UP Feb 13 15:43:07.461187 systemd-networkd[1431]: calib593fcef869: Gained carrier Feb 13 15:43:07.481637 kubelet[2620]: I0213 15:43:07.481551 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-chxfg" podStartSLOduration=5.537903381 podStartE2EDuration="35.481528534s" podCreationTimestamp="2025-02-13 15:42:32 +0000 UTC" firstStartedPulling="2025-02-13 15:42:34.68646445 +0000 UTC m=+20.322228575" lastFinishedPulling="2025-02-13 15:43:04.630089603 +0000 UTC m=+50.265853728" observedRunningTime="2025-02-13 15:43:05.849984786 +0000 UTC m=+51.485748911" watchObservedRunningTime="2025-02-13 15:43:07.481528534 +0000 UTC m=+53.117292659" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.104 [INFO][5312] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0 calico-apiserver-57b5c7b97f- calico-apiserver 34518299-f628-42e5-806b-51d2fa3ce346 795 0 2025-02-13 15:42:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b5c7b97f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57b5c7b97f-ldpr5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib593fcef869 [] []}} ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.107 [INFO][5312] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.219 [INFO][5365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" HandleID="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Workload="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.389 [INFO][5365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" HandleID="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Workload="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002411c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57b5c7b97f-ldpr5", "timestamp":"2025-02-13 15:43:07.219006277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.389 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.389 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.390 [INFO][5365] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.393 [INFO][5365] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.403 [INFO][5365] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.408 [INFO][5365] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.412 [INFO][5365] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.415 [INFO][5365] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.415 [INFO][5365] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.417 [INFO][5365] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9 Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.428 [INFO][5365] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.451 [INFO][5365] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.451 [INFO][5365] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" host="localhost" Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.451 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:43:07.487307 containerd[1509]: 2025-02-13 15:43:07.451 [INFO][5365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" HandleID="k8s-pod-network.7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Workload="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.488351 containerd[1509]: 2025-02-13 15:43:07.455 [INFO][5312] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0", GenerateName:"calico-apiserver-57b5c7b97f-", Namespace:"calico-apiserver", SelfLink:"", UID:"34518299-f628-42e5-806b-51d2fa3ce346", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b5c7b97f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57b5c7b97f-ldpr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib593fcef869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:07.488351 containerd[1509]: 2025-02-13 15:43:07.456 [INFO][5312] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.488351 containerd[1509]: 2025-02-13 15:43:07.456 [INFO][5312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib593fcef869 ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.488351 containerd[1509]: 2025-02-13 15:43:07.460 [INFO][5312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.488351 containerd[1509]: 2025-02-13 15:43:07.460 [INFO][5312] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0", GenerateName:"calico-apiserver-57b5c7b97f-", Namespace:"calico-apiserver", SelfLink:"", UID:"34518299-f628-42e5-806b-51d2fa3ce346", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b5c7b97f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9", Pod:"calico-apiserver-57b5c7b97f-ldpr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib593fcef869", MAC:"ea:71:31:fd:25:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:07.488351 containerd[1509]: 2025-02-13 15:43:07.481 [INFO][5312] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-ldpr5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--ldpr5-eth0" Feb 13 15:43:07.668280 containerd[1509]: time="2025-02-13T15:43:07.668084224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:07.668280 containerd[1509]: time="2025-02-13T15:43:07.668152957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:07.668280 containerd[1509]: time="2025-02-13T15:43:07.668164851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:07.668784 containerd[1509]: time="2025-02-13T15:43:07.668648524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:07.697622 systemd[1]: Started cri-containerd-7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9.scope - libcontainer container 7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9. 
Feb 13 15:43:07.710348 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:43:07.740308 containerd[1509]: time="2025-02-13T15:43:07.740264421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-ldpr5,Uid:34518299-f628-42e5-806b-51d2fa3ce346,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9\"" Feb 13 15:43:07.742138 containerd[1509]: time="2025-02-13T15:43:07.742055990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:43:07.816292 systemd-networkd[1431]: cali84646d3236e: Link UP Feb 13 15:43:07.817420 systemd-networkd[1431]: cali84646d3236e: Gained carrier Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.581 [INFO][5434] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wtt2x-eth0 csi-node-driver- calico-system fb72e478-27f9-4b96-ac63-312fc0de0c3b 616 0 2025-02-13 15:42:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wtt2x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali84646d3236e [] []}} ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.581 [INFO][5434] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" 
Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.622 [INFO][5450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" HandleID="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Workload="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.636 [INFO][5450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" HandleID="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Workload="localhost-k8s-csi--node--driver--wtt2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000339600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wtt2x", "timestamp":"2025-02-13 15:43:07.622746429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.636 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.636 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.636 [INFO][5450] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.638 [INFO][5450] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.642 [INFO][5450] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.646 [INFO][5450] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.648 [INFO][5450] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.650 [INFO][5450] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.650 [INFO][5450] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.652 [INFO][5450] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119 Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.752 [INFO][5450] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.809 [INFO][5450] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.809 [INFO][5450] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" host="localhost" Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.809 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:43:07.968654 containerd[1509]: 2025-02-13 15:43:07.809 [INFO][5450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" HandleID="k8s-pod-network.f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Workload="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:07.970244 containerd[1509]: 2025-02-13 15:43:07.814 [INFO][5434] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtt2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb72e478-27f9-4b96-ac63-312fc0de0c3b", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wtt2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84646d3236e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:07.970244 containerd[1509]: 2025-02-13 15:43:07.814 [INFO][5434] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:07.970244 containerd[1509]: 2025-02-13 15:43:07.814 [INFO][5434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84646d3236e ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:07.970244 containerd[1509]: 2025-02-13 15:43:07.817 [INFO][5434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:07.970244 containerd[1509]: 2025-02-13 15:43:07.817 [INFO][5434] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" 
Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wtt2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb72e478-27f9-4b96-ac63-312fc0de0c3b", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119", Pod:"csi-node-driver-wtt2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84646d3236e", MAC:"d2:7d:09:b8:3e:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:07.970244 containerd[1509]: 2025-02-13 15:43:07.963 [INFO][5434] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119" Namespace="calico-system" Pod="csi-node-driver-wtt2x" WorkloadEndpoint="localhost-k8s-csi--node--driver--wtt2x-eth0" Feb 13 15:43:08.079393 containerd[1509]: 
time="2025-02-13T15:43:08.079264012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:08.079393 containerd[1509]: time="2025-02-13T15:43:08.079356170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:08.079393 containerd[1509]: time="2025-02-13T15:43:08.079368523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:08.079832 containerd[1509]: time="2025-02-13T15:43:08.079784637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:08.105665 systemd[1]: Started cri-containerd-f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119.scope - libcontainer container f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119. 
Feb 13 15:43:08.120161 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:43:08.132508 containerd[1509]: time="2025-02-13T15:43:08.132476105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtt2x,Uid:fb72e478-27f9-4b96-ac63-312fc0de0c3b,Namespace:calico-system,Attempt:8,} returns sandbox id \"f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119\"" Feb 13 15:43:08.503875 systemd-networkd[1431]: cali5ac6e5192fa: Link UP Feb 13 15:43:08.504151 systemd-networkd[1431]: cali5ac6e5192fa: Gained carrier Feb 13 15:43:08.686496 systemd-networkd[1431]: calib593fcef869: Gained IPv6LL Feb 13 15:43:08.686932 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:07.810 [INFO][5496] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0 calico-apiserver-57b5c7b97f- calico-apiserver 41ec69cc-3a06-44c4-8295-0752449d5e76 796 0 2025-02-13 15:42:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57b5c7b97f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57b5c7b97f-nvw2j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ac6e5192fa [] []}} ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:07.810 [INFO][5496] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" 
Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.033 [INFO][5567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" HandleID="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Workload="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.063 [INFO][5567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" HandleID="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Workload="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005ab0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57b5c7b97f-nvw2j", "timestamp":"2025-02-13 15:43:08.033624084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.063 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.063 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.063 [INFO][5567] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.104 [INFO][5567] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.151 [INFO][5567] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.359 [INFO][5567] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.361 [INFO][5567] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.363 [INFO][5567] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.363 [INFO][5567] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.365 [INFO][5567] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5 Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.398 [INFO][5567] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.493 [INFO][5567] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.493 [INFO][5567] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" host="localhost" Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.494 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:43:08.725184 containerd[1509]: 2025-02-13 15:43:08.494 [INFO][5567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" HandleID="k8s-pod-network.605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Workload="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.726281 containerd[1509]: 2025-02-13 15:43:08.500 [INFO][5496] cni-plugin/k8s.go 386: Populated endpoint ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0", GenerateName:"calico-apiserver-57b5c7b97f-", Namespace:"calico-apiserver", SelfLink:"", UID:"41ec69cc-3a06-44c4-8295-0752449d5e76", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b5c7b97f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57b5c7b97f-nvw2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ac6e5192fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:08.726281 containerd[1509]: 2025-02-13 15:43:08.500 [INFO][5496] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.726281 containerd[1509]: 2025-02-13 15:43:08.500 [INFO][5496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ac6e5192fa ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.726281 containerd[1509]: 2025-02-13 15:43:08.503 [INFO][5496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.726281 containerd[1509]: 2025-02-13 15:43:08.503 [INFO][5496] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0", GenerateName:"calico-apiserver-57b5c7b97f-", Namespace:"calico-apiserver", SelfLink:"", UID:"41ec69cc-3a06-44c4-8295-0752449d5e76", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57b5c7b97f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5", Pod:"calico-apiserver-57b5c7b97f-nvw2j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ac6e5192fa", MAC:"26:e5:46:85:92:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:08.726281 containerd[1509]: 2025-02-13 15:43:08.721 [INFO][5496] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5" Namespace="calico-apiserver" Pod="calico-apiserver-57b5c7b97f-nvw2j" WorkloadEndpoint="localhost-k8s-calico--apiserver--57b5c7b97f--nvw2j-eth0" Feb 13 15:43:08.916380 containerd[1509]: time="2025-02-13T15:43:08.916266387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:08.916380 containerd[1509]: time="2025-02-13T15:43:08.916356661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:08.916380 containerd[1509]: time="2025-02-13T15:43:08.916370699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:08.916612 containerd[1509]: time="2025-02-13T15:43:08.916464339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:08.939488 systemd[1]: Started cri-containerd-605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5.scope - libcontainer container 605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5. 
Feb 13 15:43:08.954181 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:43:08.979274 containerd[1509]: time="2025-02-13T15:43:08.979222025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57b5c7b97f-nvw2j,Uid:41ec69cc-3a06-44c4-8295-0752449d5e76,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5\"" Feb 13 15:43:09.070524 systemd-networkd[1431]: cali84646d3236e: Gained IPv6LL Feb 13 15:43:09.144575 systemd-networkd[1431]: cali63cd4054bc8: Link UP Feb 13 15:43:09.144852 systemd-networkd[1431]: cali63cd4054bc8: Gained carrier Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:07.969 [INFO][5549] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7lp99-eth0 coredns-668d6bf9bc- kube-system 5fc4b308-ecdc-4011-b8c8-b82b8a22d611 788 0 2025-02-13 15:42:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7lp99 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63cd4054bc8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:07.972 [INFO][5549] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 
15:43:08.072 [INFO][5585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" HandleID="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Workload="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.150 [INFO][5585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" HandleID="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Workload="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7lp99", "timestamp":"2025-02-13 15:43:08.072968711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.151 [INFO][5585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.494 [INFO][5585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.494 [INFO][5585] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.501 [INFO][5585] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.759 [INFO][5585] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.764 [INFO][5585] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.765 [INFO][5585] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.768 [INFO][5585] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.768 [INFO][5585] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.769 [INFO][5585] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:08.849 [INFO][5585] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:09.137 [INFO][5585] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:09.137 [INFO][5585] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" host="localhost" Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:09.137 [INFO][5585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:43:09.203919 containerd[1509]: 2025-02-13 15:43:09.137 [INFO][5585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" HandleID="k8s-pod-network.fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Workload="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 15:43:09.204779 containerd[1509]: 2025-02-13 15:43:09.140 [INFO][5549] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7lp99-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5fc4b308-ecdc-4011-b8c8-b82b8a22d611", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7lp99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63cd4054bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:09.204779 containerd[1509]: 2025-02-13 15:43:09.140 [INFO][5549] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 15:43:09.204779 containerd[1509]: 2025-02-13 15:43:09.140 [INFO][5549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63cd4054bc8 ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 15:43:09.204779 containerd[1509]: 2025-02-13 15:43:09.143 [INFO][5549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 
15:43:09.204779 containerd[1509]: 2025-02-13 15:43:09.143 [INFO][5549] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7lp99-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5fc4b308-ecdc-4011-b8c8-b82b8a22d611", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e", Pod:"coredns-668d6bf9bc-7lp99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63cd4054bc8", MAC:"1e:f4:dc:8b:3a:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:09.204779 containerd[1509]: 2025-02-13 15:43:09.200 [INFO][5549] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lp99" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7lp99-eth0" Feb 13 15:43:09.346738 systemd-networkd[1431]: caliae80a31cefc: Link UP Feb 13 15:43:09.347256 systemd-networkd[1431]: caliae80a31cefc: Gained carrier Feb 13 15:43:09.356654 containerd[1509]: time="2025-02-13T15:43:09.356558732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:09.356654 containerd[1509]: time="2025-02-13T15:43:09.356606254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:09.356654 containerd[1509]: time="2025-02-13T15:43:09.356616714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:09.356823 containerd[1509]: time="2025-02-13T15:43:09.356689574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:09.385445 systemd[1]: Started cri-containerd-fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e.scope - libcontainer container fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e. 
Feb 13 15:43:09.398219 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:43:09.422282 containerd[1509]: time="2025-02-13T15:43:09.422245282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lp99,Uid:5fc4b308-ecdc-4011-b8c8-b82b8a22d611,Namespace:kube-system,Attempt:7,} returns sandbox id \"fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e\"" Feb 13 15:43:09.422916 kubelet[2620]: E0213 15:43:09.422897 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:09.424420 containerd[1509]: time="2025-02-13T15:43:09.424370743Z" level=info msg="CreateContainer within sandbox \"fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:07.963 [INFO][5535] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0 calico-kube-controllers-677779d7f9- calico-system 75e9cbc2-59c3-4f8f-a732-f7aed42e478e 792 0 2025-02-13 15:42:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:677779d7f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-677779d7f9-hvmnx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae80a31cefc [] []}} ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:07.963 [INFO][5535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:08.078 [INFO][5587] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" HandleID="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Workload="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:08.151 [INFO][5587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" HandleID="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Workload="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051a00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-677779d7f9-hvmnx", "timestamp":"2025-02-13 15:43:08.078706847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:08.151 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.137 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.137 [INFO][5587] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.140 [INFO][5587] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.146 [INFO][5587] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.151 [INFO][5587] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.200 [INFO][5587] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.204 [INFO][5587] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.204 [INFO][5587] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.207 [INFO][5587] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.218 [INFO][5587] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.340 [INFO][5587] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.340 [INFO][5587] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" host="localhost" Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.340 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:43:09.433666 containerd[1509]: 2025-02-13 15:43:09.340 [INFO][5587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" HandleID="k8s-pod-network.8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Workload="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.434369 containerd[1509]: 2025-02-13 15:43:09.343 [INFO][5535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0", GenerateName:"calico-kube-controllers-677779d7f9-", Namespace:"calico-system", SelfLink:"", UID:"75e9cbc2-59c3-4f8f-a732-f7aed42e478e", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677779d7f9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-677779d7f9-hvmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae80a31cefc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:09.434369 containerd[1509]: 2025-02-13 15:43:09.343 [INFO][5535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.434369 containerd[1509]: 2025-02-13 15:43:09.343 [INFO][5535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae80a31cefc ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.434369 containerd[1509]: 2025-02-13 15:43:09.346 [INFO][5535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.434369 containerd[1509]: 2025-02-13 15:43:09.348 [INFO][5535] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0", GenerateName:"calico-kube-controllers-677779d7f9-", Namespace:"calico-system", SelfLink:"", UID:"75e9cbc2-59c3-4f8f-a732-f7aed42e478e", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677779d7f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e", Pod:"calico-kube-controllers-677779d7f9-hvmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae80a31cefc", MAC:"1a:71:74:b5:b9:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:09.434369 containerd[1509]: 2025-02-13 15:43:09.430 [INFO][5535] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e" Namespace="calico-system" Pod="calico-kube-controllers-677779d7f9-hvmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677779d7f9--hvmnx-eth0" Feb 13 15:43:09.546910 containerd[1509]: time="2025-02-13T15:43:09.545924907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:09.546910 containerd[1509]: time="2025-02-13T15:43:09.546013899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:09.546910 containerd[1509]: time="2025-02-13T15:43:09.546076269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:09.546910 containerd[1509]: time="2025-02-13T15:43:09.546686767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:09.563121 systemd-networkd[1431]: cali959aa8c1864: Link UP Feb 13 15:43:09.563967 systemd-networkd[1431]: cali959aa8c1864: Gained carrier Feb 13 15:43:09.576483 systemd[1]: Started cri-containerd-8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e.scope - libcontainer container 8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e. 
Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:07.962 [INFO][5520] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qh42t-eth0 coredns-668d6bf9bc- kube-system ba5086be-6515-43a3-aac4-e336b6f0df11 794 0 2025-02-13 15:42:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qh42t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali959aa8c1864 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:07.963 [INFO][5520] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:08.081 [INFO][5576] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" HandleID="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Workload="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:08.151 [INFO][5576] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" HandleID="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Workload="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000345540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qh42t", "timestamp":"2025-02-13 15:43:08.081881875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:08.151 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.341 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.341 [INFO][5576] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.343 [INFO][5576] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.348 [INFO][5576] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.430 [INFO][5576] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.531 [INFO][5576] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.534 [INFO][5576] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.534 [INFO][5576] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" 
host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.536 [INFO][5576] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.545 [INFO][5576] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.554 [INFO][5576] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.554 [INFO][5576] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" host="localhost" Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.554 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:43:09.580265 containerd[1509]: 2025-02-13 15:43:09.554 [INFO][5576] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" HandleID="k8s-pod-network.e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Workload="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.580797 containerd[1509]: 2025-02-13 15:43:09.557 [INFO][5520] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qh42t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba5086be-6515-43a3-aac4-e336b6f0df11", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qh42t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959aa8c1864", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:09.580797 containerd[1509]: 2025-02-13 15:43:09.557 [INFO][5520] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.580797 containerd[1509]: 2025-02-13 15:43:09.557 [INFO][5520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali959aa8c1864 ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.580797 containerd[1509]: 2025-02-13 15:43:09.564 [INFO][5520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.580797 containerd[1509]: 2025-02-13 15:43:09.565 [INFO][5520] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qh42t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba5086be-6515-43a3-aac4-e336b6f0df11", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 42, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc", Pod:"coredns-668d6bf9bc-qh42t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959aa8c1864", MAC:"b6:37:57:4a:21:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:43:09.580797 containerd[1509]: 2025-02-13 15:43:09.573 [INFO][5520] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-qh42t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh42t-eth0" Feb 13 15:43:09.581282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235785028.mount: Deactivated successfully. Feb 13 15:43:09.585235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913305264.mount: Deactivated successfully. Feb 13 15:43:09.594176 containerd[1509]: time="2025-02-13T15:43:09.593779629Z" level=info msg="CreateContainer within sandbox \"fae3cdfcd0a3d04d89c6db09d452b6781d5806b56accfcfca7447ed6e399a58e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1d8b0b4565de6fb4bf5a50ebfc225b0eff63e67bbf30cce51d4256b14fd5891\"" Feb 13 15:43:09.594889 containerd[1509]: time="2025-02-13T15:43:09.594849803Z" level=info msg="StartContainer for \"d1d8b0b4565de6fb4bf5a50ebfc225b0eff63e67bbf30cce51d4256b14fd5891\"" Feb 13 15:43:09.596231 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:43:09.620919 containerd[1509]: time="2025-02-13T15:43:09.620632133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:09.620919 containerd[1509]: time="2025-02-13T15:43:09.620672241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:09.620919 containerd[1509]: time="2025-02-13T15:43:09.620681940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:09.620919 containerd[1509]: time="2025-02-13T15:43:09.620744710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:09.628496 systemd[1]: Started cri-containerd-d1d8b0b4565de6fb4bf5a50ebfc225b0eff63e67bbf30cce51d4256b14fd5891.scope - libcontainer container d1d8b0b4565de6fb4bf5a50ebfc225b0eff63e67bbf30cce51d4256b14fd5891. Feb 13 15:43:09.635901 containerd[1509]: time="2025-02-13T15:43:09.635845370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677779d7f9-hvmnx,Uid:75e9cbc2-59c3-4f8f-a732-f7aed42e478e,Namespace:calico-system,Attempt:7,} returns sandbox id \"8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e\"" Feb 13 15:43:09.644459 systemd[1]: Started cri-containerd-e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc.scope - libcontainer container e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc. Feb 13 15:43:09.657220 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:43:09.683716 containerd[1509]: time="2025-02-13T15:43:09.683672197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh42t,Uid:ba5086be-6515-43a3-aac4-e336b6f0df11,Namespace:kube-system,Attempt:6,} returns sandbox id \"e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc\"" Feb 13 15:43:09.685297 kubelet[2620]: E0213 15:43:09.684773 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:09.691963 containerd[1509]: time="2025-02-13T15:43:09.691789210Z" level=info msg="CreateContainer within sandbox \"e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:43:09.812509 containerd[1509]: time="2025-02-13T15:43:09.812358553Z" level=info msg="StartContainer for \"d1d8b0b4565de6fb4bf5a50ebfc225b0eff63e67bbf30cce51d4256b14fd5891\" returns 
successfully" Feb 13 15:43:09.853732 containerd[1509]: time="2025-02-13T15:43:09.853652334Z" level=info msg="CreateContainer within sandbox \"e2231b511a9e5a54541a22bb743d1a6cb27c52b13f04a97d9c76cc7a184d2cfc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5ef689904e80351c04d3bc84453abdaaa5b8d9e4fc499e37accb90ef79ad744\"" Feb 13 15:43:09.854255 containerd[1509]: time="2025-02-13T15:43:09.854224027Z" level=info msg="StartContainer for \"e5ef689904e80351c04d3bc84453abdaaa5b8d9e4fc499e37accb90ef79ad744\"" Feb 13 15:43:09.883634 systemd[1]: Started cri-containerd-e5ef689904e80351c04d3bc84453abdaaa5b8d9e4fc499e37accb90ef79ad744.scope - libcontainer container e5ef689904e80351c04d3bc84453abdaaa5b8d9e4fc499e37accb90ef79ad744. Feb 13 15:43:09.899069 kubelet[2620]: E0213 15:43:09.899026 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:09.921108 containerd[1509]: time="2025-02-13T15:43:09.920441362Z" level=info msg="StartContainer for \"e5ef689904e80351c04d3bc84453abdaaa5b8d9e4fc499e37accb90ef79ad744\" returns successfully" Feb 13 15:43:09.921297 kubelet[2620]: I0213 15:43:09.920797 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lp99" podStartSLOduration=49.920773342 podStartE2EDuration="49.920773342s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:43:09.91892183 +0000 UTC m=+55.554685955" watchObservedRunningTime="2025-02-13 15:43:09.920773342 +0000 UTC m=+55.556537467" Feb 13 15:43:10.094687 systemd-networkd[1431]: cali5ac6e5192fa: Gained IPv6LL Feb 13 15:43:10.606472 systemd-networkd[1431]: caliae80a31cefc: Gained IPv6LL Feb 13 15:43:10.904984 kubelet[2620]: E0213 15:43:10.904951 2620 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:10.905554 kubelet[2620]: E0213 15:43:10.905385 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:10.926556 systemd-networkd[1431]: cali63cd4054bc8: Gained IPv6LL Feb 13 15:43:11.004720 kubelet[2620]: I0213 15:43:11.004652 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qh42t" podStartSLOduration=51.004630987 podStartE2EDuration="51.004630987s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:43:10.989364387 +0000 UTC m=+56.625128532" watchObservedRunningTime="2025-02-13 15:43:11.004630987 +0000 UTC m=+56.640395112" Feb 13 15:43:11.311073 systemd-networkd[1431]: cali959aa8c1864: Gained IPv6LL Feb 13 15:43:11.907274 kubelet[2620]: E0213 15:43:11.907243 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:11.908537 kubelet[2620]: E0213 15:43:11.907296 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:12.430688 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:41220.service - OpenSSH per-connection server daemon (10.0.0.1:41220). 
Feb 13 15:43:12.475376 sshd[5957]: Accepted publickey for core from 10.0.0.1 port 41220 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:12.477385 sshd-session[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:12.481975 systemd-logind[1495]: New session 15 of user core. Feb 13 15:43:12.488463 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:43:12.651149 sshd[5969]: Connection closed by 10.0.0.1 port 41220 Feb 13 15:43:12.651564 sshd-session[5957]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:12.656050 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:41220.service: Deactivated successfully. Feb 13 15:43:12.658476 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:43:12.659201 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:43:12.660111 systemd-logind[1495]: Removed session 15. Feb 13 15:43:12.786133 containerd[1509]: time="2025-02-13T15:43:12.785960259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:12.836121 containerd[1509]: time="2025-02-13T15:43:12.836052939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 15:43:12.866613 containerd[1509]: time="2025-02-13T15:43:12.866536562Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:12.870657 containerd[1509]: time="2025-02-13T15:43:12.870604674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:12.871554 containerd[1509]: time="2025-02-13T15:43:12.871491369Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.129364384s" Feb 13 15:43:12.871554 containerd[1509]: time="2025-02-13T15:43:12.871546162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:43:12.872856 containerd[1509]: time="2025-02-13T15:43:12.872823873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:43:12.874094 containerd[1509]: time="2025-02-13T15:43:12.874049375Z" level=info msg="CreateContainer within sandbox \"7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:43:12.907201 containerd[1509]: time="2025-02-13T15:43:12.907134195Z" level=info msg="CreateContainer within sandbox \"7df5f0c0edba01129a5f7d4dfb20d227553ff5785953104c8e548639664b5dd9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3d6ae54470598f5b68d819d3c53d2677b1dc39d37e3d24d9e29ad26e747394b1\"" Feb 13 15:43:12.907876 containerd[1509]: time="2025-02-13T15:43:12.907819312Z" level=info msg="StartContainer for \"3d6ae54470598f5b68d819d3c53d2677b1dc39d37e3d24d9e29ad26e747394b1\"" Feb 13 15:43:12.913191 kubelet[2620]: E0213 15:43:12.913159 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:12.914045 kubelet[2620]: E0213 15:43:12.913215 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:12.948499 systemd[1]: Started cri-containerd-3d6ae54470598f5b68d819d3c53d2677b1dc39d37e3d24d9e29ad26e747394b1.scope - libcontainer container 3d6ae54470598f5b68d819d3c53d2677b1dc39d37e3d24d9e29ad26e747394b1. Feb 13 15:43:12.991778 containerd[1509]: time="2025-02-13T15:43:12.991733915Z" level=info msg="StartContainer for \"3d6ae54470598f5b68d819d3c53d2677b1dc39d37e3d24d9e29ad26e747394b1\" returns successfully" Feb 13 15:43:14.102296 kubelet[2620]: I0213 15:43:14.102194 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57b5c7b97f-ldpr5" podStartSLOduration=36.97113921 podStartE2EDuration="42.102173849s" podCreationTimestamp="2025-02-13 15:42:32 +0000 UTC" firstStartedPulling="2025-02-13 15:43:07.741577776 +0000 UTC m=+53.377341901" lastFinishedPulling="2025-02-13 15:43:12.872612415 +0000 UTC m=+58.508376540" observedRunningTime="2025-02-13 15:43:14.101645235 +0000 UTC m=+59.737409370" watchObservedRunningTime="2025-02-13 15:43:14.102173849 +0000 UTC m=+59.737937974" Feb 13 15:43:14.454475 containerd[1509]: time="2025-02-13T15:43:14.454420069Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:43:14.455026 containerd[1509]: time="2025-02-13T15:43:14.454544413Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:43:14.455026 containerd[1509]: time="2025-02-13T15:43:14.454593795Z" level=info msg="StopPodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:43:14.455122 containerd[1509]: time="2025-02-13T15:43:14.455023855Z" level=info msg="RemovePodSandbox for \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:43:14.488008 containerd[1509]: time="2025-02-13T15:43:14.487930540Z" level=info msg="Forcibly stopping 
sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\"" Feb 13 15:43:14.488164 containerd[1509]: time="2025-02-13T15:43:14.488093556Z" level=info msg="TearDown network for sandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" successfully" Feb 13 15:43:14.725813 containerd[1509]: time="2025-02-13T15:43:14.725662076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:43:14.725813 containerd[1509]: time="2025-02-13T15:43:14.725729633Z" level=info msg="RemovePodSandbox \"a2fb9ebc88ad998ad5efe69fb7c843de6ddefbf849e468f0094c2fad12ff7878\" returns successfully" Feb 13 15:43:14.726250 containerd[1509]: time="2025-02-13T15:43:14.726207171Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:43:14.726382 containerd[1509]: time="2025-02-13T15:43:14.726350251Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" successfully" Feb 13 15:43:14.726382 containerd[1509]: time="2025-02-13T15:43:14.726370709Z" level=info msg="StopPodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:43:14.726664 containerd[1509]: time="2025-02-13T15:43:14.726641327Z" level=info msg="RemovePodSandbox for \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:43:14.726720 containerd[1509]: time="2025-02-13T15:43:14.726668229Z" level=info msg="Forcibly stopping sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\"" Feb 13 15:43:14.726814 containerd[1509]: time="2025-02-13T15:43:14.726748770Z" level=info msg="TearDown network for sandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" 
successfully" Feb 13 15:43:14.970019 containerd[1509]: time="2025-02-13T15:43:14.969966965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:43:14.970019 containerd[1509]: time="2025-02-13T15:43:14.970036597Z" level=info msg="RemovePodSandbox \"546d38136e99b918502bc8aae08820b404500f3e0bdd1089d2509020fd9864ca\" returns successfully" Feb 13 15:43:14.970624 containerd[1509]: time="2025-02-13T15:43:14.970582543Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:43:14.970777 containerd[1509]: time="2025-02-13T15:43:14.970745991Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:43:14.970777 containerd[1509]: time="2025-02-13T15:43:14.970770617Z" level=info msg="StopPodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:43:14.971245 containerd[1509]: time="2025-02-13T15:43:14.971219892Z" level=info msg="RemovePodSandbox for \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:43:14.971306 containerd[1509]: time="2025-02-13T15:43:14.971254086Z" level=info msg="Forcibly stopping sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\"" Feb 13 15:43:14.971433 containerd[1509]: time="2025-02-13T15:43:14.971382338Z" level=info msg="TearDown network for sandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" successfully" Feb 13 15:43:14.979299 containerd[1509]: time="2025-02-13T15:43:14.979172871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\": an error occurred when try to 
find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:43:14.979299 containerd[1509]: time="2025-02-13T15:43:14.979233545Z" level=info msg="RemovePodSandbox \"644646abfff0d7a5455f4dfe112b80d4f619d7685017f5fe9c828ac2ac319c09\" returns successfully" Feb 13 15:43:14.979813 containerd[1509]: time="2025-02-13T15:43:14.979762340Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:43:14.980113 containerd[1509]: time="2025-02-13T15:43:14.980079236Z" level=info msg="TearDown network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" successfully" Feb 13 15:43:14.980144 containerd[1509]: time="2025-02-13T15:43:14.980126385Z" level=info msg="StopPodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" returns successfully" Feb 13 15:43:14.980516 containerd[1509]: time="2025-02-13T15:43:14.980482334Z" level=info msg="RemovePodSandbox for \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:43:14.980516 containerd[1509]: time="2025-02-13T15:43:14.980513152Z" level=info msg="Forcibly stopping sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\"" Feb 13 15:43:14.980713 containerd[1509]: time="2025-02-13T15:43:14.980600015Z" level=info msg="TearDown network for sandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" successfully" Feb 13 15:43:14.985416 containerd[1509]: time="2025-02-13T15:43:14.985360630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:14.985416 containerd[1509]: time="2025-02-13T15:43:14.985417326Z" level=info msg="RemovePodSandbox \"287303b9f3ba29c4cb9336bbc3b0b4923d17817b148c821a13e2cc2ba3bf9e9d\" returns successfully" Feb 13 15:43:14.985823 containerd[1509]: time="2025-02-13T15:43:14.985778335Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" Feb 13 15:43:14.985978 containerd[1509]: time="2025-02-13T15:43:14.985923298Z" level=info msg="TearDown network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" successfully" Feb 13 15:43:14.985978 containerd[1509]: time="2025-02-13T15:43:14.985974013Z" level=info msg="StopPodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" returns successfully" Feb 13 15:43:14.986364 containerd[1509]: time="2025-02-13T15:43:14.986335053Z" level=info msg="RemovePodSandbox for \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" Feb 13 15:43:14.986429 containerd[1509]: time="2025-02-13T15:43:14.986366101Z" level=info msg="Forcibly stopping sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\"" Feb 13 15:43:14.986501 containerd[1509]: time="2025-02-13T15:43:14.986449287Z" level=info msg="TearDown network for sandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" successfully" Feb 13 15:43:14.990734 containerd[1509]: time="2025-02-13T15:43:14.990690214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:14.990781 containerd[1509]: time="2025-02-13T15:43:14.990750197Z" level=info msg="RemovePodSandbox \"fae5ecbf93c3c8875a62f7347fc71f4591aab439db295c8f3bb7a6d5b270d858\" returns successfully" Feb 13 15:43:14.991167 containerd[1509]: time="2025-02-13T15:43:14.991134991Z" level=info msg="StopPodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\"" Feb 13 15:43:14.991286 containerd[1509]: time="2025-02-13T15:43:14.991257792Z" level=info msg="TearDown network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" successfully" Feb 13 15:43:14.991286 containerd[1509]: time="2025-02-13T15:43:14.991280375Z" level=info msg="StopPodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" returns successfully" Feb 13 15:43:14.991597 containerd[1509]: time="2025-02-13T15:43:14.991559330Z" level=info msg="RemovePodSandbox for \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\"" Feb 13 15:43:14.991597 containerd[1509]: time="2025-02-13T15:43:14.991591279Z" level=info msg="Forcibly stopping sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\"" Feb 13 15:43:14.991733 containerd[1509]: time="2025-02-13T15:43:14.991680497Z" level=info msg="TearDown network for sandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" successfully" Feb 13 15:43:14.996072 containerd[1509]: time="2025-02-13T15:43:14.996031060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:14.996138 containerd[1509]: time="2025-02-13T15:43:14.996094489Z" level=info msg="RemovePodSandbox \"2d7e928706041cc7edb66b1a0a963ce5ef58ae4e39a6061c17b1836d5369a03a\" returns successfully" Feb 13 15:43:14.996444 containerd[1509]: time="2025-02-13T15:43:14.996411435Z" level=info msg="StopPodSandbox for \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\"" Feb 13 15:43:14.996595 containerd[1509]: time="2025-02-13T15:43:14.996521452Z" level=info msg="TearDown network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\" successfully" Feb 13 15:43:14.996595 containerd[1509]: time="2025-02-13T15:43:14.996536390Z" level=info msg="StopPodSandbox for \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\" returns successfully" Feb 13 15:43:14.996816 containerd[1509]: time="2025-02-13T15:43:14.996786031Z" level=info msg="RemovePodSandbox for \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\"" Feb 13 15:43:14.996869 containerd[1509]: time="2025-02-13T15:43:14.996814163Z" level=info msg="Forcibly stopping sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\"" Feb 13 15:43:14.996967 containerd[1509]: time="2025-02-13T15:43:14.996902359Z" level=info msg="TearDown network for sandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\" successfully" Feb 13 15:43:15.001346 containerd[1509]: time="2025-02-13T15:43:15.001301463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.001400 containerd[1509]: time="2025-02-13T15:43:15.001365574Z" level=info msg="RemovePodSandbox \"faec03a3d97499bb90eddc52c7f711eb955f8b03763f07c526ef9c9205c21ec6\" returns successfully" Feb 13 15:43:15.001992 containerd[1509]: time="2025-02-13T15:43:15.001807997Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:43:15.001992 containerd[1509]: time="2025-02-13T15:43:15.001917273Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:43:15.001992 containerd[1509]: time="2025-02-13T15:43:15.001928724Z" level=info msg="StopPodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:43:15.002241 containerd[1509]: time="2025-02-13T15:43:15.002206727Z" level=info msg="RemovePodSandbox for \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:43:15.002327 containerd[1509]: time="2025-02-13T15:43:15.002243927Z" level=info msg="Forcibly stopping sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\"" Feb 13 15:43:15.002415 containerd[1509]: time="2025-02-13T15:43:15.002368472Z" level=info msg="TearDown network for sandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" successfully" Feb 13 15:43:15.006332 containerd[1509]: time="2025-02-13T15:43:15.006267761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.006420 containerd[1509]: time="2025-02-13T15:43:15.006344495Z" level=info msg="RemovePodSandbox \"63de70b5341227c9f6446ddeab4a7a6a0530a0254fe49f379d89b993ba9726b8\" returns successfully" Feb 13 15:43:15.006692 containerd[1509]: time="2025-02-13T15:43:15.006655359Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:43:15.006790 containerd[1509]: time="2025-02-13T15:43:15.006764215Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully" Feb 13 15:43:15.006790 containerd[1509]: time="2025-02-13T15:43:15.006784603Z" level=info msg="StopPodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully" Feb 13 15:43:15.007073 containerd[1509]: time="2025-02-13T15:43:15.007033051Z" level=info msg="RemovePodSandbox for \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:43:15.007073 containerd[1509]: time="2025-02-13T15:43:15.007063307Z" level=info msg="Forcibly stopping sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\"" Feb 13 15:43:15.007197 containerd[1509]: time="2025-02-13T15:43:15.007153027Z" level=info msg="TearDown network for sandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" successfully" Feb 13 15:43:15.011537 containerd[1509]: time="2025-02-13T15:43:15.011489909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.011629 containerd[1509]: time="2025-02-13T15:43:15.011552156Z" level=info msg="RemovePodSandbox \"feff789ccf53f41480eddc0d279af3167359969d5a38c45b67b5cd068cb8e8b4\" returns successfully" Feb 13 15:43:15.011888 containerd[1509]: time="2025-02-13T15:43:15.011853052Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:43:15.011997 containerd[1509]: time="2025-02-13T15:43:15.011977096Z" level=info msg="TearDown network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" successfully" Feb 13 15:43:15.012028 containerd[1509]: time="2025-02-13T15:43:15.011994568Z" level=info msg="StopPodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" returns successfully" Feb 13 15:43:15.012236 containerd[1509]: time="2025-02-13T15:43:15.012203532Z" level=info msg="RemovePodSandbox for \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:43:15.012236 containerd[1509]: time="2025-02-13T15:43:15.012231334Z" level=info msg="Forcibly stopping sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\"" Feb 13 15:43:15.012381 containerd[1509]: time="2025-02-13T15:43:15.012332755Z" level=info msg="TearDown network for sandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" successfully" Feb 13 15:43:15.015916 containerd[1509]: time="2025-02-13T15:43:15.015876415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.016051 containerd[1509]: time="2025-02-13T15:43:15.015928122Z" level=info msg="RemovePodSandbox \"1295efd05058bcc18872799a990e9bbfd002fb303802885a3d791b723bc1a145\" returns successfully" Feb 13 15:43:15.016353 containerd[1509]: time="2025-02-13T15:43:15.016301234Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" Feb 13 15:43:15.016455 containerd[1509]: time="2025-02-13T15:43:15.016438413Z" level=info msg="TearDown network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" successfully" Feb 13 15:43:15.016455 containerd[1509]: time="2025-02-13T15:43:15.016452469Z" level=info msg="StopPodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" returns successfully" Feb 13 15:43:15.016687 containerd[1509]: time="2025-02-13T15:43:15.016659810Z" level=info msg="RemovePodSandbox for \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" Feb 13 15:43:15.016733 containerd[1509]: time="2025-02-13T15:43:15.016686009Z" level=info msg="Forcibly stopping sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\"" Feb 13 15:43:15.016797 containerd[1509]: time="2025-02-13T15:43:15.016758535Z" level=info msg="TearDown network for sandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" successfully" Feb 13 15:43:15.020597 containerd[1509]: time="2025-02-13T15:43:15.020552055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.020680 containerd[1509]: time="2025-02-13T15:43:15.020603140Z" level=info msg="RemovePodSandbox \"90b8ab4f028b602596f433bd28aed68b8709a472f8a2bda5d3b66ac9f25fb555\" returns successfully" Feb 13 15:43:15.020908 containerd[1509]: time="2025-02-13T15:43:15.020887116Z" level=info msg="StopPodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\"" Feb 13 15:43:15.021002 containerd[1509]: time="2025-02-13T15:43:15.020982344Z" level=info msg="TearDown network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" successfully" Feb 13 15:43:15.021002 containerd[1509]: time="2025-02-13T15:43:15.020995279Z" level=info msg="StopPodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" returns successfully" Feb 13 15:43:15.021378 containerd[1509]: time="2025-02-13T15:43:15.021350107Z" level=info msg="RemovePodSandbox for \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\"" Feb 13 15:43:15.021378 containerd[1509]: time="2025-02-13T15:43:15.021375735Z" level=info msg="Forcibly stopping sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\"" Feb 13 15:43:15.021506 containerd[1509]: time="2025-02-13T15:43:15.021462749Z" level=info msg="TearDown network for sandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" successfully" Feb 13 15:43:15.025422 containerd[1509]: time="2025-02-13T15:43:15.025381023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.025526 containerd[1509]: time="2025-02-13T15:43:15.025457547Z" level=info msg="RemovePodSandbox \"9f760e5d360ffa42846422adf5409db58d11ed7c42beea514b8e7c4606e8750c\" returns successfully" Feb 13 15:43:15.026269 containerd[1509]: time="2025-02-13T15:43:15.026235432Z" level=info msg="StopPodSandbox for \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\"" Feb 13 15:43:15.026424 containerd[1509]: time="2025-02-13T15:43:15.026362470Z" level=info msg="TearDown network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\" successfully" Feb 13 15:43:15.026424 containerd[1509]: time="2025-02-13T15:43:15.026418646Z" level=info msg="StopPodSandbox for \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\" returns successfully" Feb 13 15:43:15.026689 containerd[1509]: time="2025-02-13T15:43:15.026652376Z" level=info msg="RemovePodSandbox for \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\"" Feb 13 15:43:15.026689 containerd[1509]: time="2025-02-13T15:43:15.026678486Z" level=info msg="Forcibly stopping sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\"" Feb 13 15:43:15.026820 containerd[1509]: time="2025-02-13T15:43:15.026776369Z" level=info msg="TearDown network for sandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\" successfully" Feb 13 15:43:15.030587 containerd[1509]: time="2025-02-13T15:43:15.030534422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.030684 containerd[1509]: time="2025-02-13T15:43:15.030592902Z" level=info msg="RemovePodSandbox \"a740361b1a7fa5a043bed8d60e5dc1d81895eed388a62986738b88754af12f81\" returns successfully" Feb 13 15:43:15.030917 containerd[1509]: time="2025-02-13T15:43:15.030880113Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:43:15.031020 containerd[1509]: time="2025-02-13T15:43:15.030993596Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:43:15.031020 containerd[1509]: time="2025-02-13T15:43:15.031013043Z" level=info msg="StopPodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:43:15.031418 containerd[1509]: time="2025-02-13T15:43:15.031389712Z" level=info msg="RemovePodSandbox for \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:43:15.031482 containerd[1509]: time="2025-02-13T15:43:15.031422724Z" level=info msg="Forcibly stopping sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\"" Feb 13 15:43:15.031552 containerd[1509]: time="2025-02-13T15:43:15.031508185Z" level=info msg="TearDown network for sandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" successfully" Feb 13 15:43:15.035284 containerd[1509]: time="2025-02-13T15:43:15.035249226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.035455 containerd[1509]: time="2025-02-13T15:43:15.035298459Z" level=info msg="RemovePodSandbox \"b43f928bb6683438b5e6074782dbd9523cfcbb0c890ad49d4dc585d19d915964\" returns successfully" Feb 13 15:43:15.035692 containerd[1509]: time="2025-02-13T15:43:15.035662223Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:43:15.035796 containerd[1509]: time="2025-02-13T15:43:15.035775757Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 13 15:43:15.035850 containerd[1509]: time="2025-02-13T15:43:15.035795494Z" level=info msg="StopPodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully" Feb 13 15:43:15.036121 containerd[1509]: time="2025-02-13T15:43:15.036090870Z" level=info msg="RemovePodSandbox for \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:43:15.036169 containerd[1509]: time="2025-02-13T15:43:15.036120125Z" level=info msg="Forcibly stopping sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\"" Feb 13 15:43:15.036247 containerd[1509]: time="2025-02-13T15:43:15.036198563Z" level=info msg="TearDown network for sandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" successfully" Feb 13 15:43:15.040043 containerd[1509]: time="2025-02-13T15:43:15.039997052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.040127 containerd[1509]: time="2025-02-13T15:43:15.040050914Z" level=info msg="RemovePodSandbox \"599d7d0de632b0c000fcd71b5d2e1eb7294ddc4bac05e9a76f7f769f32e0c34f\" returns successfully" Feb 13 15:43:15.040467 containerd[1509]: time="2025-02-13T15:43:15.040439755Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:43:15.040563 containerd[1509]: time="2025-02-13T15:43:15.040538190Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 15:43:15.040563 containerd[1509]: time="2025-02-13T15:43:15.040552577Z" level=info msg="StopPodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns successfully" Feb 13 15:43:15.040878 containerd[1509]: time="2025-02-13T15:43:15.040849296Z" level=info msg="RemovePodSandbox for \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:43:15.040974 containerd[1509]: time="2025-02-13T15:43:15.040880644Z" level=info msg="Forcibly stopping sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\"" Feb 13 15:43:15.041029 containerd[1509]: time="2025-02-13T15:43:15.040985131Z" level=info msg="TearDown network for sandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" successfully" Feb 13 15:43:15.044845 containerd[1509]: time="2025-02-13T15:43:15.044807024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.044897 containerd[1509]: time="2025-02-13T15:43:15.044870484Z" level=info msg="RemovePodSandbox \"d5abb2d33746771378c93d866c611ef30eb371864b17d89913e86b97df7db048\" returns successfully" Feb 13 15:43:15.045294 containerd[1509]: time="2025-02-13T15:43:15.045258775Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:43:15.045408 containerd[1509]: time="2025-02-13T15:43:15.045375885Z" level=info msg="TearDown network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" successfully" Feb 13 15:43:15.045408 containerd[1509]: time="2025-02-13T15:43:15.045387427Z" level=info msg="StopPodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" returns successfully" Feb 13 15:43:15.045713 containerd[1509]: time="2025-02-13T15:43:15.045683394Z" level=info msg="RemovePodSandbox for \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:43:15.045768 containerd[1509]: time="2025-02-13T15:43:15.045716757Z" level=info msg="Forcibly stopping sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\"" Feb 13 15:43:15.045859 containerd[1509]: time="2025-02-13T15:43:15.045811996Z" level=info msg="TearDown network for sandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" successfully" Feb 13 15:43:15.049472 containerd[1509]: time="2025-02-13T15:43:15.049443290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.049536 containerd[1509]: time="2025-02-13T15:43:15.049489798Z" level=info msg="RemovePodSandbox \"b15bcfc20a61fb0e01de46651d720c595389bcc1129b535d52129c6238804d17\" returns successfully" Feb 13 15:43:15.049955 containerd[1509]: time="2025-02-13T15:43:15.049770106Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" Feb 13 15:43:15.049955 containerd[1509]: time="2025-02-13T15:43:15.049868671Z" level=info msg="TearDown network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" successfully" Feb 13 15:43:15.049955 containerd[1509]: time="2025-02-13T15:43:15.049879581Z" level=info msg="StopPodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" returns successfully" Feb 13 15:43:15.050176 containerd[1509]: time="2025-02-13T15:43:15.050144370Z" level=info msg="RemovePodSandbox for \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" Feb 13 15:43:15.050221 containerd[1509]: time="2025-02-13T15:43:15.050183624Z" level=info msg="Forcibly stopping sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\"" Feb 13 15:43:15.050358 containerd[1509]: time="2025-02-13T15:43:15.050299522Z" level=info msg="TearDown network for sandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" successfully" Feb 13 15:43:15.053978 containerd[1509]: time="2025-02-13T15:43:15.053917111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.053978 containerd[1509]: time="2025-02-13T15:43:15.053976392Z" level=info msg="RemovePodSandbox \"512ee3f12ca709aadaf51aa16e1414866975bd4c04abe9a62768486763ee15bd\" returns successfully" Feb 13 15:43:15.054374 containerd[1509]: time="2025-02-13T15:43:15.054313006Z" level=info msg="StopPodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\"" Feb 13 15:43:15.054460 containerd[1509]: time="2025-02-13T15:43:15.054438703Z" level=info msg="TearDown network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" successfully" Feb 13 15:43:15.054498 containerd[1509]: time="2025-02-13T15:43:15.054457348Z" level=info msg="StopPodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" returns successfully" Feb 13 15:43:15.054772 containerd[1509]: time="2025-02-13T15:43:15.054749066Z" level=info msg="RemovePodSandbox for \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\"" Feb 13 15:43:15.054827 containerd[1509]: time="2025-02-13T15:43:15.054774685Z" level=info msg="Forcibly stopping sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\"" Feb 13 15:43:15.054895 containerd[1509]: time="2025-02-13T15:43:15.054859495Z" level=info msg="TearDown network for sandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" successfully" Feb 13 15:43:15.058523 containerd[1509]: time="2025-02-13T15:43:15.058499425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.058594 containerd[1509]: time="2025-02-13T15:43:15.058548217Z" level=info msg="RemovePodSandbox \"eec83d90b95d014673da3a61dcf401389f2d10c2115274f9ed956264123e72f8\" returns successfully" Feb 13 15:43:15.058820 containerd[1509]: time="2025-02-13T15:43:15.058789922Z" level=info msg="StopPodSandbox for \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\"" Feb 13 15:43:15.058939 containerd[1509]: time="2025-02-13T15:43:15.058904146Z" level=info msg="TearDown network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\" successfully" Feb 13 15:43:15.058939 containerd[1509]: time="2025-02-13T15:43:15.058922131Z" level=info msg="StopPodSandbox for \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\" returns successfully" Feb 13 15:43:15.059259 containerd[1509]: time="2025-02-13T15:43:15.059230140Z" level=info msg="RemovePodSandbox for \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\"" Feb 13 15:43:15.059259 containerd[1509]: time="2025-02-13T15:43:15.059253314Z" level=info msg="Forcibly stopping sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\"" Feb 13 15:43:15.059378 containerd[1509]: time="2025-02-13T15:43:15.059336421Z" level=info msg="TearDown network for sandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\" successfully" Feb 13 15:43:15.063506 containerd[1509]: time="2025-02-13T15:43:15.063456244Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.063605 containerd[1509]: time="2025-02-13T15:43:15.063544620Z" level=info msg="RemovePodSandbox \"edd5c7b2681ad98c5ef61ed334040df1aff648793a7c354dc92b618c67d5766a\" returns successfully" Feb 13 15:43:15.064677 containerd[1509]: time="2025-02-13T15:43:15.064636095Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:43:15.064785 containerd[1509]: time="2025-02-13T15:43:15.064744138Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:43:15.064785 containerd[1509]: time="2025-02-13T15:43:15.064764236Z" level=info msg="StopPodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:43:15.067157 containerd[1509]: time="2025-02-13T15:43:15.065246523Z" level=info msg="RemovePodSandbox for \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:43:15.067157 containerd[1509]: time="2025-02-13T15:43:15.065278043Z" level=info msg="Forcibly stopping sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\"" Feb 13 15:43:15.067157 containerd[1509]: time="2025-02-13T15:43:15.065387128Z" level=info msg="TearDown network for sandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" successfully" Feb 13 15:43:15.069004 containerd[1509]: time="2025-02-13T15:43:15.068960384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.069004 containerd[1509]: time="2025-02-13T15:43:15.068997223Z" level=info msg="RemovePodSandbox \"a189b8ea5b78c4c31b9aabce7c905890e60a9d2ba9e7217ff8f15519594f94fa\" returns successfully" Feb 13 15:43:15.069377 containerd[1509]: time="2025-02-13T15:43:15.069354836Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:43:15.069486 containerd[1509]: time="2025-02-13T15:43:15.069436970Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:43:15.069486 containerd[1509]: time="2025-02-13T15:43:15.069481323Z" level=info msg="StopPodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:43:15.069708 containerd[1509]: time="2025-02-13T15:43:15.069687211Z" level=info msg="RemovePodSandbox for \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:43:15.069742 containerd[1509]: time="2025-02-13T15:43:15.069709012Z" level=info msg="Forcibly stopping sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\"" Feb 13 15:43:15.069811 containerd[1509]: time="2025-02-13T15:43:15.069774516Z" level=info msg="TearDown network for sandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" successfully" Feb 13 15:43:15.073462 containerd[1509]: time="2025-02-13T15:43:15.073426629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.073462 containerd[1509]: time="2025-02-13T15:43:15.073460433Z" level=info msg="RemovePodSandbox \"e43611b8f6e6e3124c724679e737ded77630ac1cbd3b1739bd90ebc056c8904b\" returns successfully" Feb 13 15:43:15.073754 containerd[1509]: time="2025-02-13T15:43:15.073733397Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:43:15.073845 containerd[1509]: time="2025-02-13T15:43:15.073811904Z" level=info msg="TearDown network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:43:15.073845 containerd[1509]: time="2025-02-13T15:43:15.073840287Z" level=info msg="StopPodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:43:15.074102 containerd[1509]: time="2025-02-13T15:43:15.074074919Z" level=info msg="RemovePodSandbox for \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:43:15.074102 containerd[1509]: time="2025-02-13T15:43:15.074099145Z" level=info msg="Forcibly stopping sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\"" Feb 13 15:43:15.074197 containerd[1509]: time="2025-02-13T15:43:15.074166502Z" level=info msg="TearDown network for sandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" successfully" Feb 13 15:43:15.077774 containerd[1509]: time="2025-02-13T15:43:15.077743935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.077829 containerd[1509]: time="2025-02-13T15:43:15.077776065Z" level=info msg="RemovePodSandbox \"b3c1a6466d12fd39267ffd0a7c07bac33bf7aa1c3178f04ae601d9b29f13b1d8\" returns successfully" Feb 13 15:43:15.078081 containerd[1509]: time="2025-02-13T15:43:15.078047095Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:43:15.078162 containerd[1509]: time="2025-02-13T15:43:15.078145941Z" level=info msg="TearDown network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" successfully" Feb 13 15:43:15.078192 containerd[1509]: time="2025-02-13T15:43:15.078159807Z" level=info msg="StopPodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" returns successfully" Feb 13 15:43:15.078431 containerd[1509]: time="2025-02-13T15:43:15.078410760Z" level=info msg="RemovePodSandbox for \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:43:15.078470 containerd[1509]: time="2025-02-13T15:43:15.078432020Z" level=info msg="Forcibly stopping sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\"" Feb 13 15:43:15.078537 containerd[1509]: time="2025-02-13T15:43:15.078503775Z" level=info msg="TearDown network for sandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" successfully" Feb 13 15:43:15.082525 containerd[1509]: time="2025-02-13T15:43:15.082461684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.082645 containerd[1509]: time="2025-02-13T15:43:15.082528480Z" level=info msg="RemovePodSandbox \"6cc023e0581aaf1c3fe8f2a065b899981b51b5a11d9cb8209bb26ba2757b4186\" returns successfully" Feb 13 15:43:15.082923 containerd[1509]: time="2025-02-13T15:43:15.082897775Z" level=info msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" Feb 13 15:43:15.083059 containerd[1509]: time="2025-02-13T15:43:15.083038750Z" level=info msg="TearDown network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" successfully" Feb 13 15:43:15.083096 containerd[1509]: time="2025-02-13T15:43:15.083059299Z" level=info msg="StopPodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" returns successfully" Feb 13 15:43:15.083348 containerd[1509]: time="2025-02-13T15:43:15.083285845Z" level=info msg="RemovePodSandbox for \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" Feb 13 15:43:15.083348 containerd[1509]: time="2025-02-13T15:43:15.083305893Z" level=info msg="Forcibly stopping sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\"" Feb 13 15:43:15.083533 containerd[1509]: time="2025-02-13T15:43:15.083482816Z" level=info msg="TearDown network for sandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" successfully" Feb 13 15:43:15.087016 containerd[1509]: time="2025-02-13T15:43:15.086989616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.087102 containerd[1509]: time="2025-02-13T15:43:15.087031955Z" level=info msg="RemovePodSandbox \"037c9cbbc3ed6eb3ac635d18129f9dd0830b4e79c30be289a747dde086016847\" returns successfully" Feb 13 15:43:15.087562 containerd[1509]: time="2025-02-13T15:43:15.087366745Z" level=info msg="StopPodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\"" Feb 13 15:43:15.087562 containerd[1509]: time="2025-02-13T15:43:15.087481421Z" level=info msg="TearDown network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" successfully" Feb 13 15:43:15.087562 containerd[1509]: time="2025-02-13T15:43:15.087496900Z" level=info msg="StopPodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" returns successfully" Feb 13 15:43:15.087727 containerd[1509]: time="2025-02-13T15:43:15.087705323Z" level=info msg="RemovePodSandbox for \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\"" Feb 13 15:43:15.087727 containerd[1509]: time="2025-02-13T15:43:15.087726653Z" level=info msg="Forcibly stopping sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\"" Feb 13 15:43:15.087816 containerd[1509]: time="2025-02-13T15:43:15.087786946Z" level=info msg="TearDown network for sandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" successfully" Feb 13 15:43:15.091384 containerd[1509]: time="2025-02-13T15:43:15.091363097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.091457 containerd[1509]: time="2025-02-13T15:43:15.091396249Z" level=info msg="RemovePodSandbox \"a07674047bbdb447466e963fd95cef5b60d2dd96f27e8f77eb3616e7c3c9135f\" returns successfully" Feb 13 15:43:15.091700 containerd[1509]: time="2025-02-13T15:43:15.091648194Z" level=info msg="StopPodSandbox for \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\"" Feb 13 15:43:15.091748 containerd[1509]: time="2025-02-13T15:43:15.091726060Z" level=info msg="TearDown network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\" successfully" Feb 13 15:43:15.091748 containerd[1509]: time="2025-02-13T15:43:15.091734215Z" level=info msg="StopPodSandbox for \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\" returns successfully" Feb 13 15:43:15.092361 containerd[1509]: time="2025-02-13T15:43:15.091972454Z" level=info msg="RemovePodSandbox for \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\"" Feb 13 15:43:15.092361 containerd[1509]: time="2025-02-13T15:43:15.091995367Z" level=info msg="Forcibly stopping sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\"" Feb 13 15:43:15.092361 containerd[1509]: time="2025-02-13T15:43:15.092069968Z" level=info msg="TearDown network for sandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\" successfully" Feb 13 15:43:15.096232 containerd[1509]: time="2025-02-13T15:43:15.096194771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.096307 containerd[1509]: time="2025-02-13T15:43:15.096246868Z" level=info msg="RemovePodSandbox \"35be3b8698f2aebc6ac20e943e911fbe39edd556e8aa4cf17dfe9350caf42566\" returns successfully" Feb 13 15:43:15.096554 containerd[1509]: time="2025-02-13T15:43:15.096531014Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:43:15.096643 containerd[1509]: time="2025-02-13T15:43:15.096626183Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:43:15.096682 containerd[1509]: time="2025-02-13T15:43:15.096641110Z" level=info msg="StopPodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:43:15.096978 containerd[1509]: time="2025-02-13T15:43:15.096948088Z" level=info msg="RemovePodSandbox for \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:43:15.096978 containerd[1509]: time="2025-02-13T15:43:15.096973988Z" level=info msg="Forcibly stopping sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\"" Feb 13 15:43:15.097095 containerd[1509]: time="2025-02-13T15:43:15.097061141Z" level=info msg="TearDown network for sandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" successfully" Feb 13 15:43:15.101489 containerd[1509]: time="2025-02-13T15:43:15.101447276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.101565 containerd[1509]: time="2025-02-13T15:43:15.101490228Z" level=info msg="RemovePodSandbox \"950e41dacceb2ca5f32d3b45a93e4960a8ca0479e03e49f6eafbedba1b2c5b1b\" returns successfully" Feb 13 15:43:15.101816 containerd[1509]: time="2025-02-13T15:43:15.101783760Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:43:15.101968 containerd[1509]: time="2025-02-13T15:43:15.101896081Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully" Feb 13 15:43:15.101968 containerd[1509]: time="2025-02-13T15:43:15.101959821Z" level=info msg="StopPodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully" Feb 13 15:43:15.102214 containerd[1509]: time="2025-02-13T15:43:15.102189083Z" level=info msg="RemovePodSandbox for \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:43:15.102256 containerd[1509]: time="2025-02-13T15:43:15.102215853Z" level=info msg="Forcibly stopping sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\"" Feb 13 15:43:15.102365 containerd[1509]: time="2025-02-13T15:43:15.102308457Z" level=info msg="TearDown network for sandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" successfully" Feb 13 15:43:15.106551 containerd[1509]: time="2025-02-13T15:43:15.106521035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.106631 containerd[1509]: time="2025-02-13T15:43:15.106569938Z" level=info msg="RemovePodSandbox \"51479e9be26afc44d71c6cc3c02006ec727004fd219a26471f9cdb4c718f290d\" returns successfully" Feb 13 15:43:15.106884 containerd[1509]: time="2025-02-13T15:43:15.106841770Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:43:15.106959 containerd[1509]: time="2025-02-13T15:43:15.106920818Z" level=info msg="TearDown network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" successfully" Feb 13 15:43:15.106959 containerd[1509]: time="2025-02-13T15:43:15.106938090Z" level=info msg="StopPodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" returns successfully" Feb 13 15:43:15.107190 containerd[1509]: time="2025-02-13T15:43:15.107163675Z" level=info msg="RemovePodSandbox for \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:43:15.107241 containerd[1509]: time="2025-02-13T15:43:15.107190716Z" level=info msg="Forcibly stopping sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\"" Feb 13 15:43:15.107330 containerd[1509]: time="2025-02-13T15:43:15.107271709Z" level=info msg="TearDown network for sandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" successfully" Feb 13 15:43:15.112989 containerd[1509]: time="2025-02-13T15:43:15.112950186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.113089 containerd[1509]: time="2025-02-13T15:43:15.112992104Z" level=info msg="RemovePodSandbox \"62c22867b5c51e9ad1d951d4ea03f0659710776d438547009cc9c77881dcbf42\" returns successfully" Feb 13 15:43:15.113354 containerd[1509]: time="2025-02-13T15:43:15.113308200Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" Feb 13 15:43:15.113486 containerd[1509]: time="2025-02-13T15:43:15.113445328Z" level=info msg="TearDown network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" successfully" Feb 13 15:43:15.113486 containerd[1509]: time="2025-02-13T15:43:15.113463122Z" level=info msg="StopPodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" returns successfully" Feb 13 15:43:15.113742 containerd[1509]: time="2025-02-13T15:43:15.113718562Z" level=info msg="RemovePodSandbox for \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" Feb 13 15:43:15.113804 containerd[1509]: time="2025-02-13T15:43:15.113745022Z" level=info msg="Forcibly stopping sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\"" Feb 13 15:43:15.113860 containerd[1509]: time="2025-02-13T15:43:15.113826675Z" level=info msg="TearDown network for sandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" successfully" Feb 13 15:43:15.117484 containerd[1509]: time="2025-02-13T15:43:15.117449604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.117556 containerd[1509]: time="2025-02-13T15:43:15.117498195Z" level=info msg="RemovePodSandbox \"9822071dadeaa774c17d5454cf90d9ad4adbfb532c26629855355c9bcbbf1c2c\" returns successfully" Feb 13 15:43:15.117825 containerd[1509]: time="2025-02-13T15:43:15.117802298Z" level=info msg="StopPodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\"" Feb 13 15:43:15.117924 containerd[1509]: time="2025-02-13T15:43:15.117902346Z" level=info msg="TearDown network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" successfully" Feb 13 15:43:15.117924 containerd[1509]: time="2025-02-13T15:43:15.117916933Z" level=info msg="StopPodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" returns successfully" Feb 13 15:43:15.118205 containerd[1509]: time="2025-02-13T15:43:15.118183175Z" level=info msg="RemovePodSandbox for \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\"" Feb 13 15:43:15.118266 containerd[1509]: time="2025-02-13T15:43:15.118210426Z" level=info msg="Forcibly stopping sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\"" Feb 13 15:43:15.118348 containerd[1509]: time="2025-02-13T15:43:15.118289645Z" level=info msg="TearDown network for sandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" successfully" Feb 13 15:43:15.126662 containerd[1509]: time="2025-02-13T15:43:15.126599305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.126742 containerd[1509]: time="2025-02-13T15:43:15.126684886Z" level=info msg="RemovePodSandbox \"67addc92bde7e66734bf22636ed017cb86c0f4ca8f81f3a8b5e4775d5e33faff\" returns successfully" Feb 13 15:43:15.127079 containerd[1509]: time="2025-02-13T15:43:15.127046246Z" level=info msg="StopPodSandbox for \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\"" Feb 13 15:43:15.127176 containerd[1509]: time="2025-02-13T15:43:15.127153368Z" level=info msg="TearDown network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\" successfully" Feb 13 15:43:15.127176 containerd[1509]: time="2025-02-13T15:43:15.127170900Z" level=info msg="StopPodSandbox for \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\" returns successfully" Feb 13 15:43:15.127575 containerd[1509]: time="2025-02-13T15:43:15.127539575Z" level=info msg="RemovePodSandbox for \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\"" Feb 13 15:43:15.127575 containerd[1509]: time="2025-02-13T15:43:15.127573007Z" level=info msg="Forcibly stopping sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\"" Feb 13 15:43:15.127669 containerd[1509]: time="2025-02-13T15:43:15.127638851Z" level=info msg="TearDown network for sandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\" successfully" Feb 13 15:43:15.130961 containerd[1509]: time="2025-02-13T15:43:15.130926730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.131021 containerd[1509]: time="2025-02-13T15:43:15.130972936Z" level=info msg="RemovePodSandbox \"58a54764e11ede0ed6e535950b95033bbd5b0c58684ec891db52a4fc02cabaaa\" returns successfully" Feb 13 15:43:15.131288 containerd[1509]: time="2025-02-13T15:43:15.131252853Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:43:15.131404 containerd[1509]: time="2025-02-13T15:43:15.131383048Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:43:15.131404 containerd[1509]: time="2025-02-13T15:43:15.131401713Z" level=info msg="StopPodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:43:15.131714 containerd[1509]: time="2025-02-13T15:43:15.131689435Z" level=info msg="RemovePodSandbox for \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:43:15.131762 containerd[1509]: time="2025-02-13T15:43:15.131719761Z" level=info msg="Forcibly stopping sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\"" Feb 13 15:43:15.131864 containerd[1509]: time="2025-02-13T15:43:15.131821944Z" level=info msg="TearDown network for sandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" successfully" Feb 13 15:43:15.135536 containerd[1509]: time="2025-02-13T15:43:15.135512490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.135582 containerd[1509]: time="2025-02-13T15:43:15.135555752Z" level=info msg="RemovePodSandbox \"3424dd0eadfc8879a254b030b081f77a40a61eff00aab39ee104a72d32231c49\" returns successfully" Feb 13 15:43:15.135867 containerd[1509]: time="2025-02-13T15:43:15.135836721Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:43:15.136016 containerd[1509]: time="2025-02-13T15:43:15.135945735Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:43:15.136016 containerd[1509]: time="2025-02-13T15:43:15.135957447Z" level=info msg="StopPodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:43:15.136247 containerd[1509]: time="2025-02-13T15:43:15.136218869Z" level=info msg="RemovePodSandbox for \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:43:15.136298 containerd[1509]: time="2025-02-13T15:43:15.136250969Z" level=info msg="Forcibly stopping sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\"" Feb 13 15:43:15.136393 containerd[1509]: time="2025-02-13T15:43:15.136372689Z" level=info msg="TearDown network for sandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" successfully" Feb 13 15:43:15.139880 containerd[1509]: time="2025-02-13T15:43:15.139847329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.139917 containerd[1509]: time="2025-02-13T15:43:15.139886041Z" level=info msg="RemovePodSandbox \"6e2acc5bfb290aba5b2766f744e10d7f65060feffa1fa1c35516c509afdabf43\" returns successfully" Feb 13 15:43:15.140240 containerd[1509]: time="2025-02-13T15:43:15.140195584Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:43:15.140347 containerd[1509]: time="2025-02-13T15:43:15.140286725Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:43:15.140347 containerd[1509]: time="2025-02-13T15:43:15.140303337Z" level=info msg="StopPodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:43:15.140607 containerd[1509]: time="2025-02-13T15:43:15.140582522Z" level=info msg="RemovePodSandbox for \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:43:15.140640 containerd[1509]: time="2025-02-13T15:43:15.140610705Z" level=info msg="Forcibly stopping sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\"" Feb 13 15:43:15.140733 containerd[1509]: time="2025-02-13T15:43:15.140696737Z" level=info msg="TearDown network for sandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" successfully" Feb 13 15:43:15.144161 containerd[1509]: time="2025-02-13T15:43:15.144132583Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.144191 containerd[1509]: time="2025-02-13T15:43:15.144176487Z" level=info msg="RemovePodSandbox \"bace4b94a064b9d40e4425d400640d8fee504fa7490de67fd7388f01be4db07a\" returns successfully" Feb 13 15:43:15.144471 containerd[1509]: time="2025-02-13T15:43:15.144452737Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:43:15.144557 containerd[1509]: time="2025-02-13T15:43:15.144537415Z" level=info msg="TearDown network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:43:15.144591 containerd[1509]: time="2025-02-13T15:43:15.144554427Z" level=info msg="StopPodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns successfully" Feb 13 15:43:15.144797 containerd[1509]: time="2025-02-13T15:43:15.144752340Z" level=info msg="RemovePodSandbox for \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:43:15.144797 containerd[1509]: time="2025-02-13T15:43:15.144774402Z" level=info msg="Forcibly stopping sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\"" Feb 13 15:43:15.144891 containerd[1509]: time="2025-02-13T15:43:15.144838572Z" level=info msg="TearDown network for sandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" successfully" Feb 13 15:43:15.148373 containerd[1509]: time="2025-02-13T15:43:15.148337097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.148373 containerd[1509]: time="2025-02-13T15:43:15.148371713Z" level=info msg="RemovePodSandbox \"054911f59bca5727993e2a50a02b0e317a84775d35308fe8aaea0de38d29af1c\" returns successfully" Feb 13 15:43:15.148865 containerd[1509]: time="2025-02-13T15:43:15.148701112Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:43:15.148865 containerd[1509]: time="2025-02-13T15:43:15.148796021Z" level=info msg="TearDown network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" successfully" Feb 13 15:43:15.148865 containerd[1509]: time="2025-02-13T15:43:15.148806721Z" level=info msg="StopPodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" returns successfully" Feb 13 15:43:15.149129 containerd[1509]: time="2025-02-13T15:43:15.149097999Z" level=info msg="RemovePodSandbox for \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:43:15.149178 containerd[1509]: time="2025-02-13T15:43:15.149134458Z" level=info msg="Forcibly stopping sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\"" Feb 13 15:43:15.149263 containerd[1509]: time="2025-02-13T15:43:15.149223596Z" level=info msg="TearDown network for sandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" successfully" Feb 13 15:43:15.152829 containerd[1509]: time="2025-02-13T15:43:15.152791210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.152871 containerd[1509]: time="2025-02-13T15:43:15.152835503Z" level=info msg="RemovePodSandbox \"acc32deef50a61b912b7338437e735654799329eba08b4c5334a02983960fbf8\" returns successfully" Feb 13 15:43:15.153154 containerd[1509]: time="2025-02-13T15:43:15.153122934Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" Feb 13 15:43:15.153254 containerd[1509]: time="2025-02-13T15:43:15.153209868Z" level=info msg="TearDown network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" successfully" Feb 13 15:43:15.153254 containerd[1509]: time="2025-02-13T15:43:15.153220388Z" level=info msg="StopPodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" returns successfully" Feb 13 15:43:15.153510 containerd[1509]: time="2025-02-13T15:43:15.153487280Z" level=info msg="RemovePodSandbox for \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" Feb 13 15:43:15.153576 containerd[1509]: time="2025-02-13T15:43:15.153513179Z" level=info msg="Forcibly stopping sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\"" Feb 13 15:43:15.153601 containerd[1509]: time="2025-02-13T15:43:15.153580867Z" level=info msg="TearDown network for sandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" successfully" Feb 13 15:43:15.157222 containerd[1509]: time="2025-02-13T15:43:15.157190460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.157285 containerd[1509]: time="2025-02-13T15:43:15.157233100Z" level=info msg="RemovePodSandbox \"e6e03ece71398a6bcaa0d0f952b3a9435f4e46a57fd66da788e184c7def71a7a\" returns successfully" Feb 13 15:43:15.157586 containerd[1509]: time="2025-02-13T15:43:15.157564975Z" level=info msg="StopPodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\"" Feb 13 15:43:15.157667 containerd[1509]: time="2025-02-13T15:43:15.157651387Z" level=info msg="TearDown network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" successfully" Feb 13 15:43:15.157667 containerd[1509]: time="2025-02-13T15:43:15.157665324Z" level=info msg="StopPodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" returns successfully" Feb 13 15:43:15.157969 containerd[1509]: time="2025-02-13T15:43:15.157950361Z" level=info msg="RemovePodSandbox for \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\"" Feb 13 15:43:15.158002 containerd[1509]: time="2025-02-13T15:43:15.157970538Z" level=info msg="Forcibly stopping sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\"" Feb 13 15:43:15.158062 containerd[1509]: time="2025-02-13T15:43:15.158032444Z" level=info msg="TearDown network for sandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" successfully" Feb 13 15:43:15.161733 containerd[1509]: time="2025-02-13T15:43:15.161695969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.161795 containerd[1509]: time="2025-02-13T15:43:15.161742276Z" level=info msg="RemovePodSandbox \"9dd39a173cf457d9a14a869d7f504fd811331658317a680a3109696bee11b547\" returns successfully" Feb 13 15:43:15.162143 containerd[1509]: time="2025-02-13T15:43:15.162104729Z" level=info msg="StopPodSandbox for \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\"" Feb 13 15:43:15.162261 containerd[1509]: time="2025-02-13T15:43:15.162229915Z" level=info msg="TearDown network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\" successfully" Feb 13 15:43:15.162261 containerd[1509]: time="2025-02-13T15:43:15.162251034Z" level=info msg="StopPodSandbox for \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\" returns successfully" Feb 13 15:43:15.162544 containerd[1509]: time="2025-02-13T15:43:15.162518979Z" level=info msg="RemovePodSandbox for \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\"" Feb 13 15:43:15.162609 containerd[1509]: time="2025-02-13T15:43:15.162550067Z" level=info msg="Forcibly stopping sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\"" Feb 13 15:43:15.162678 containerd[1509]: time="2025-02-13T15:43:15.162633394Z" level=info msg="TearDown network for sandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\" successfully" Feb 13 15:43:15.166391 containerd[1509]: time="2025-02-13T15:43:15.166368002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:43:15.166466 containerd[1509]: time="2025-02-13T15:43:15.166425972Z" level=info msg="RemovePodSandbox \"18e4e8c5b6bd347039d6593b2c0f680a8a80517b09550d4dfbf72df13ed00476\" returns successfully" Feb 13 15:43:15.435933 containerd[1509]: time="2025-02-13T15:43:15.435883885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:15.436612 containerd[1509]: time="2025-02-13T15:43:15.436581759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:43:15.437861 containerd[1509]: time="2025-02-13T15:43:15.437839987Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:15.439982 containerd[1509]: time="2025-02-13T15:43:15.439918359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:15.440498 containerd[1509]: time="2025-02-13T15:43:15.440452804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.567593454s" Feb 13 15:43:15.440498 containerd[1509]: time="2025-02-13T15:43:15.440481799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:43:15.442812 containerd[1509]: time="2025-02-13T15:43:15.441387083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:43:15.443764 
containerd[1509]: time="2025-02-13T15:43:15.443732026Z" level=info msg="CreateContainer within sandbox \"f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:43:15.465238 containerd[1509]: time="2025-02-13T15:43:15.465183369Z" level=info msg="CreateContainer within sandbox \"f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"312671406a48cf97de36cd03cd7b444c2f8d69ecf0641d5904608f5e8cd3987d\"" Feb 13 15:43:15.465710 containerd[1509]: time="2025-02-13T15:43:15.465617446Z" level=info msg="StartContainer for \"312671406a48cf97de36cd03cd7b444c2f8d69ecf0641d5904608f5e8cd3987d\"" Feb 13 15:43:15.501468 systemd[1]: Started cri-containerd-312671406a48cf97de36cd03cd7b444c2f8d69ecf0641d5904608f5e8cd3987d.scope - libcontainer container 312671406a48cf97de36cd03cd7b444c2f8d69ecf0641d5904608f5e8cd3987d. Feb 13 15:43:15.534882 containerd[1509]: time="2025-02-13T15:43:15.534836883Z" level=info msg="StartContainer for \"312671406a48cf97de36cd03cd7b444c2f8d69ecf0641d5904608f5e8cd3987d\" returns successfully" Feb 13 15:43:15.889143 containerd[1509]: time="2025-02-13T15:43:15.889063828Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:15.889882 containerd[1509]: time="2025-02-13T15:43:15.889814422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:43:15.891979 containerd[1509]: time="2025-02-13T15:43:15.891942006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 449.576862ms" Feb 13 15:43:15.891979 containerd[1509]: time="2025-02-13T15:43:15.891970670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 15:43:15.892955 containerd[1509]: time="2025-02-13T15:43:15.892925297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:43:15.893956 containerd[1509]: time="2025-02-13T15:43:15.893926350Z" level=info msg="CreateContainer within sandbox \"605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:43:15.909548 containerd[1509]: time="2025-02-13T15:43:15.909332636Z" level=info msg="CreateContainer within sandbox \"605e3ca1aadb4becb6f0299c584a4a4b7cc602b4fbd4b0b7f87f1e323baefbc5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6cdab7d0370cfb63f40075c0d712b4afe62b8c424edd156ca4f0b468bb9ac41d\"" Feb 13 15:43:15.910517 containerd[1509]: time="2025-02-13T15:43:15.910454057Z" level=info msg="StartContainer for \"6cdab7d0370cfb63f40075c0d712b4afe62b8c424edd156ca4f0b468bb9ac41d\"" Feb 13 15:43:15.939453 systemd[1]: Started cri-containerd-6cdab7d0370cfb63f40075c0d712b4afe62b8c424edd156ca4f0b468bb9ac41d.scope - libcontainer container 6cdab7d0370cfb63f40075c0d712b4afe62b8c424edd156ca4f0b468bb9ac41d. 
Feb 13 15:43:16.001252 containerd[1509]: time="2025-02-13T15:43:16.001132381Z" level=info msg="StartContainer for \"6cdab7d0370cfb63f40075c0d712b4afe62b8c424edd156ca4f0b468bb9ac41d\" returns successfully" Feb 13 15:43:16.951206 kubelet[2620]: I0213 15:43:16.951065 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57b5c7b97f-nvw2j" podStartSLOduration=38.038817293 podStartE2EDuration="44.951043179s" podCreationTimestamp="2025-02-13 15:42:32 +0000 UTC" firstStartedPulling="2025-02-13 15:43:08.980538497 +0000 UTC m=+54.616302632" lastFinishedPulling="2025-02-13 15:43:15.892764383 +0000 UTC m=+61.528528518" observedRunningTime="2025-02-13 15:43:16.950352318 +0000 UTC m=+62.586116443" watchObservedRunningTime="2025-02-13 15:43:16.951043179 +0000 UTC m=+62.586807304" Feb 13 15:43:17.665189 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:41226.service - OpenSSH per-connection server daemon (10.0.0.1:41226). Feb 13 15:43:17.743351 sshd[6118]: Accepted publickey for core from 10.0.0.1 port 41226 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:17.745413 sshd-session[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:17.749957 systemd-logind[1495]: New session 16 of user core. Feb 13 15:43:17.759435 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:43:17.883197 sshd[6120]: Connection closed by 10.0.0.1 port 41226 Feb 13 15:43:17.883562 sshd-session[6118]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:17.887635 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:41226.service: Deactivated successfully. Feb 13 15:43:17.889799 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:43:17.890517 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:43:17.891411 systemd-logind[1495]: Removed session 16. 
Feb 13 15:43:17.941133 kubelet[2620]: I0213 15:43:17.941010 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:43:19.945496 kernel: hrtimer: interrupt took 2015943 ns Feb 13 15:43:20.709785 containerd[1509]: time="2025-02-13T15:43:20.709727369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:20.711111 containerd[1509]: time="2025-02-13T15:43:20.711056508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:43:20.712681 containerd[1509]: time="2025-02-13T15:43:20.712626260Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:20.715687 containerd[1509]: time="2025-02-13T15:43:20.715649236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:43:20.716303 containerd[1509]: time="2025-02-13T15:43:20.716272902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.823241567s" Feb 13 15:43:20.716357 containerd[1509]: time="2025-02-13T15:43:20.716303260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:43:20.717146 containerd[1509]: time="2025-02-13T15:43:20.717123999Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:43:20.728238 containerd[1509]: time="2025-02-13T15:43:20.728193271Z" level=info msg="CreateContainer within sandbox \"8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:43:20.741609 containerd[1509]: time="2025-02-13T15:43:20.741562813Z" level=info msg="CreateContainer within sandbox \"8ceed642daf49c24de8b238f19fe89124adb3caa7c1f07710c0fdbc6cb870c3e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2b459fbccaeddcdaff7c1456423335198a10bf2f861069949e086ce321c8c77f\"" Feb 13 15:43:20.742361 containerd[1509]: time="2025-02-13T15:43:20.742285317Z" level=info msg="StartContainer for \"2b459fbccaeddcdaff7c1456423335198a10bf2f861069949e086ce321c8c77f\"" Feb 13 15:43:20.768484 systemd[1]: Started cri-containerd-2b459fbccaeddcdaff7c1456423335198a10bf2f861069949e086ce321c8c77f.scope - libcontainer container 2b459fbccaeddcdaff7c1456423335198a10bf2f861069949e086ce321c8c77f. 
Feb 13 15:43:20.816651 containerd[1509]: time="2025-02-13T15:43:20.816598623Z" level=info msg="StartContainer for \"2b459fbccaeddcdaff7c1456423335198a10bf2f861069949e086ce321c8c77f\" returns successfully"
Feb 13 15:43:21.042915 kubelet[2620]: I0213 15:43:21.042706 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-677779d7f9-hvmnx" podStartSLOduration=36.963451909 podStartE2EDuration="48.042643017s" podCreationTimestamp="2025-02-13 15:42:33 +0000 UTC" firstStartedPulling="2025-02-13 15:43:09.637791775 +0000 UTC m=+55.273555900" lastFinishedPulling="2025-02-13 15:43:20.716982883 +0000 UTC m=+66.352747008" observedRunningTime="2025-02-13 15:43:21.042535845 +0000 UTC m=+66.678299970" watchObservedRunningTime="2025-02-13 15:43:21.042643017 +0000 UTC m=+66.678407142"
Feb 13 15:43:22.896598 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:47596.service - OpenSSH per-connection server daemon (10.0.0.1:47596).
Feb 13 15:43:23.055431 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 47596 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:23.057017 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:23.070399 systemd-logind[1495]: New session 17 of user core.
Feb 13 15:43:23.080607 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:43:23.226415 sshd[6201]: Connection closed by 10.0.0.1 port 47596
Feb 13 15:43:23.226893 sshd-session[6199]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:23.231623 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:47596.service: Deactivated successfully.
Feb 13 15:43:23.234983 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:43:23.235839 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:43:23.237251 systemd-logind[1495]: Removed session 17.
Feb 13 15:43:24.930956 containerd[1509]: time="2025-02-13T15:43:24.930872406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:24.933275 containerd[1509]: time="2025-02-13T15:43:24.933212211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 15:43:24.934635 containerd[1509]: time="2025-02-13T15:43:24.934565178Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:24.937301 containerd[1509]: time="2025-02-13T15:43:24.937255786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:24.937943 containerd[1509]: time="2025-02-13T15:43:24.937906197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 4.220756679s"
Feb 13 15:43:24.937943 containerd[1509]: time="2025-02-13T15:43:24.937935171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 15:43:24.940340 containerd[1509]: time="2025-02-13T15:43:24.940220844Z" level=info msg="CreateContainer within sandbox \"f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:43:24.957728 containerd[1509]: time="2025-02-13T15:43:24.957675791Z" level=info msg="CreateContainer within sandbox \"f5be915b065cfa196c199d99267b311d7bf3325e58a52e1772bff871181b9119\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"08312802453515aa5313fdbfe432a0fca736744f545495703b69ff83683b0c82\""
Feb 13 15:43:24.959991 containerd[1509]: time="2025-02-13T15:43:24.958208177Z" level=info msg="StartContainer for \"08312802453515aa5313fdbfe432a0fca736744f545495703b69ff83683b0c82\""
Feb 13 15:43:24.993530 systemd[1]: Started cri-containerd-08312802453515aa5313fdbfe432a0fca736744f545495703b69ff83683b0c82.scope - libcontainer container 08312802453515aa5313fdbfe432a0fca736744f545495703b69ff83683b0c82.
Feb 13 15:43:25.051808 containerd[1509]: time="2025-02-13T15:43:25.051745509Z" level=info msg="StartContainer for \"08312802453515aa5313fdbfe432a0fca736744f545495703b69ff83683b0c82\" returns successfully"
Feb 13 15:43:25.551980 kubelet[2620]: I0213 15:43:25.551928 2620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:43:25.551980 kubelet[2620]: I0213 15:43:25.551968 2620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:43:26.095493 kubelet[2620]: I0213 15:43:26.095428 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wtt2x" podStartSLOduration=37.290847908 podStartE2EDuration="54.095408204s" podCreationTimestamp="2025-02-13 15:42:32 +0000 UTC" firstStartedPulling="2025-02-13 15:43:08.134049822 +0000 UTC m=+53.769813947" lastFinishedPulling="2025-02-13 15:43:24.938610118 +0000 UTC m=+70.574374243" observedRunningTime="2025-02-13 15:43:26.095244624 +0000 UTC m=+71.731008769" watchObservedRunningTime="2025-02-13 15:43:26.095408204 +0000 UTC m=+71.731172329"
Feb 13 15:43:28.248714 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:47612.service - OpenSSH per-connection server daemon (10.0.0.1:47612).
Feb 13 15:43:28.289386 sshd[6263]: Accepted publickey for core from 10.0.0.1 port 47612 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:28.290925 sshd-session[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:28.295109 systemd-logind[1495]: New session 18 of user core.
Feb 13 15:43:28.302437 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:43:28.426184 sshd[6265]: Connection closed by 10.0.0.1 port 47612
Feb 13 15:43:28.426538 sshd-session[6263]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:28.440432 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:47612.service: Deactivated successfully.
Feb 13 15:43:28.442572 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:43:28.444373 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:43:28.454563 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614).
Feb 13 15:43:28.455500 systemd-logind[1495]: Removed session 18.
Feb 13 15:43:28.478475 kubelet[2620]: E0213 15:43:28.478448 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:43:28.489700 sshd[6277]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:28.491215 sshd-session[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:28.495540 systemd-logind[1495]: New session 19 of user core.
Feb 13 15:43:28.505523 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:43:29.235286 sshd[6280]: Connection closed by 10.0.0.1 port 47614
Feb 13 15:43:29.235012 sshd-session[6277]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:29.249671 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:47614.service: Deactivated successfully.
Feb 13 15:43:29.251962 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:43:29.253413 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:43:29.260592 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:47618.service - OpenSSH per-connection server daemon (10.0.0.1:47618).
Feb 13 15:43:29.261674 systemd-logind[1495]: Removed session 19.
Feb 13 15:43:29.297273 sshd[6291]: Accepted publickey for core from 10.0.0.1 port 47618 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:29.299065 sshd-session[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:29.303753 systemd-logind[1495]: New session 20 of user core.
Feb 13 15:43:29.314456 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:43:30.802502 sshd[6294]: Connection closed by 10.0.0.1 port 47618
Feb 13 15:43:30.803217 sshd-session[6291]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:30.821227 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:47618.service: Deactivated successfully.
Feb 13 15:43:30.823109 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:43:30.824936 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:43:30.833595 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:44718.service - OpenSSH per-connection server daemon (10.0.0.1:44718).
Feb 13 15:43:30.834827 systemd-logind[1495]: Removed session 20.
Feb 13 15:43:30.869795 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 44718 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:30.871527 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:30.876407 systemd-logind[1495]: New session 21 of user core.
Feb 13 15:43:30.884479 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:43:31.316639 sshd[6337]: Connection closed by 10.0.0.1 port 44718
Feb 13 15:43:31.317071 sshd-session[6334]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:31.329499 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:44718.service: Deactivated successfully.
Feb 13 15:43:31.331675 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:43:31.333372 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:43:31.339639 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730).
Feb 13 15:43:31.340690 systemd-logind[1495]: Removed session 21.
Feb 13 15:43:31.374446 sshd[6348]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:31.375775 sshd-session[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:31.380423 systemd-logind[1495]: New session 22 of user core.
Feb 13 15:43:31.388446 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:43:31.621365 sshd[6351]: Connection closed by 10.0.0.1 port 44730
Feb 13 15:43:31.621632 sshd-session[6348]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:31.625805 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:44730.service: Deactivated successfully.
Feb 13 15:43:31.628080 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:43:31.628959 systemd-logind[1495]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:43:31.629915 systemd-logind[1495]: Removed session 22.
Feb 13 15:43:32.174862 kubelet[2620]: I0213 15:43:32.174819 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:43:35.476168 kubelet[2620]: E0213 15:43:35.476123 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:43:36.633922 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:44734.service - OpenSSH per-connection server daemon (10.0.0.1:44734).
Feb 13 15:43:36.672791 sshd[6366]: Accepted publickey for core from 10.0.0.1 port 44734 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:36.674806 sshd-session[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:36.679505 systemd-logind[1495]: New session 23 of user core.
Feb 13 15:43:36.685430 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:43:36.800598 sshd[6368]: Connection closed by 10.0.0.1 port 44734
Feb 13 15:43:36.800973 sshd-session[6366]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:36.805200 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:44734.service: Deactivated successfully.
Feb 13 15:43:36.807364 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:43:36.808011 systemd-logind[1495]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:43:36.808969 systemd-logind[1495]: Removed session 23.
Feb 13 15:43:36.947484 kubelet[2620]: E0213 15:43:36.947305 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:43:41.813507 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:42074.service - OpenSSH per-connection server daemon (10.0.0.1:42074).
Feb 13 15:43:41.857825 sshd[6409]: Accepted publickey for core from 10.0.0.1 port 42074 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:41.859656 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:41.864343 systemd-logind[1495]: New session 24 of user core.
Feb 13 15:43:41.874602 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:43:41.985968 sshd[6411]: Connection closed by 10.0.0.1 port 42074
Feb 13 15:43:41.986401 sshd-session[6409]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:41.990788 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:42074.service: Deactivated successfully.
Feb 13 15:43:41.993142 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:43:41.993903 systemd-logind[1495]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:43:41.994844 systemd-logind[1495]: Removed session 24.
Feb 13 15:43:44.475923 kubelet[2620]: E0213 15:43:44.475878 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:43:47.004299 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:42088.service - OpenSSH per-connection server daemon (10.0.0.1:42088).
Feb 13 15:43:47.048281 sshd[6425]: Accepted publickey for core from 10.0.0.1 port 42088 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:47.050011 sshd-session[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:47.054438 systemd-logind[1495]: New session 25 of user core.
Feb 13 15:43:47.061516 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:43:47.186586 sshd[6427]: Connection closed by 10.0.0.1 port 42088
Feb 13 15:43:47.186987 sshd-session[6425]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:47.190895 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:42088.service: Deactivated successfully.
Feb 13 15:43:47.193432 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:43:47.194157 systemd-logind[1495]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:43:47.195053 systemd-logind[1495]: Removed session 25.
Feb 13 15:43:51.475865 kubelet[2620]: E0213 15:43:51.475822 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:43:52.201940 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:49404.service - OpenSSH per-connection server daemon (10.0.0.1:49404).
Feb 13 15:43:52.240687 sshd[6470]: Accepted publickey for core from 10.0.0.1 port 49404 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:43:52.242395 sshd-session[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:52.247053 systemd-logind[1495]: New session 26 of user core.
Feb 13 15:43:52.258521 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:43:52.375165 sshd[6472]: Connection closed by 10.0.0.1 port 49404
Feb 13 15:43:52.375628 sshd-session[6470]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:52.380352 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:49404.service: Deactivated successfully.
Feb 13 15:43:52.382650 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:43:52.383588 systemd-logind[1495]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:43:52.384746 systemd-logind[1495]: Removed session 26.