Jan 13 20:44:49.950287 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:44:49.950326 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:44:49.950342 kernel: BIOS-provided physical RAM map:
Jan 13 20:44:49.950349 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 20:44:49.950355 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 20:44:49.950363 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 20:44:49.950374 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 20:44:49.950383 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 20:44:49.950390 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 13 20:44:49.950398 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 13 20:44:49.950408 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 13 20:44:49.950414 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 13 20:44:49.950424 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 13 20:44:49.950431 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 13 20:44:49.950441 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 13 20:44:49.950448 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 20:44:49.950458 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 13 20:44:49.950465 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 13 20:44:49.950472 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 13 20:44:49.950478 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 13 20:44:49.950485 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 13 20:44:49.950492 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 20:44:49.950498 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:44:49.950505 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:44:49.950512 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 13 20:44:49.950519 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:44:49.950525 kernel: NX (Execute Disable) protection: active
Jan 13 20:44:49.950535 kernel: APIC: Static calls initialized
Jan 13 20:44:49.950543 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 13 20:44:49.950552 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 13 20:44:49.950578 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 13 20:44:49.950587 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 13 20:44:49.950596 kernel: extended physical RAM map:
Jan 13 20:44:49.950605 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 20:44:49.950612 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 20:44:49.950618 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 20:44:49.950625 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 20:44:49.950632 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 20:44:49.950643 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 13 20:44:49.950650 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 13 20:44:49.950661 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 13 20:44:49.950668 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 13 20:44:49.950675 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 13 20:44:49.950683 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 13 20:44:49.950692 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 13 20:44:49.950709 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 13 20:44:49.950716 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 13 20:44:49.950723 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 13 20:44:49.950730 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 13 20:44:49.950738 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 20:44:49.950745 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 13 20:44:49.950752 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 13 20:44:49.950759 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 13 20:44:49.950766 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 13 20:44:49.950776 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 13 20:44:49.950783 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 20:44:49.950790 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:44:49.950797 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:44:49.950807 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 13 20:44:49.950814 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:44:49.950821 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:44:49.950828 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 13 20:44:49.950836 kernel: random: crng init done
Jan 13 20:44:49.950843 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 13 20:44:49.950850 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 13 20:44:49.950862 kernel: secureboot: Secure boot disabled
Jan 13 20:44:49.950869 kernel: SMBIOS 2.8 present.
Jan 13 20:44:49.950876 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 13 20:44:49.950883 kernel: Hypervisor detected: KVM
Jan 13 20:44:49.950890 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:44:49.950897 kernel: kvm-clock: using sched offset of 3863957690 cycles
Jan 13 20:44:49.950905 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:44:49.950913 kernel: tsc: Detected 2794.750 MHz processor
Jan 13 20:44:49.950921 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:44:49.950928 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:44:49.950938 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 13 20:44:49.950945 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 13 20:44:49.950952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:44:49.950960 kernel: Using GB pages for direct mapping
Jan 13 20:44:49.950968 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:44:49.950982 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 13 20:44:49.950998 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:44:49.951008 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:49.951018 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:49.951033 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 13 20:44:49.951044 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:49.951054 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:49.951064 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:49.951074 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:49.951083 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:44:49.951093 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 13 20:44:49.951102 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 13 20:44:49.951110 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 13 20:44:49.951121 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 13 20:44:49.951128 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 13 20:44:49.951135 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 13 20:44:49.951142 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 13 20:44:49.951150 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 13 20:44:49.951157 kernel: No NUMA configuration found
Jan 13 20:44:49.951164 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 13 20:44:49.951171 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 13 20:44:49.951179 kernel: Zone ranges:
Jan 13 20:44:49.951189 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:44:49.951196 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 13 20:44:49.951203 kernel: Normal empty
Jan 13 20:44:49.951214 kernel: Movable zone start for each node
Jan 13 20:44:49.951222 kernel: Early memory node ranges
Jan 13 20:44:49.951229 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 13 20:44:49.951236 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 13 20:44:49.951244 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 13 20:44:49.951251 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 13 20:44:49.951259 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 13 20:44:49.951268 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 13 20:44:49.951276 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 13 20:44:49.951283 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 13 20:44:49.951290 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 13 20:44:49.951308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:44:49.951315 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 13 20:44:49.951341 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 13 20:44:49.951355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:44:49.951363 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 13 20:44:49.951370 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 13 20:44:49.951378 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 20:44:49.951389 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 13 20:44:49.951399 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 13 20:44:49.951407 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:44:49.951415 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:44:49.951422 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:44:49.951432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:44:49.951440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:44:49.951448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:44:49.951455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:44:49.951463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:44:49.951471 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:44:49.951478 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:44:49.951486 kernel: TSC deadline timer available
Jan 13 20:44:49.951493 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:44:49.951506 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:44:49.951517 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:44:49.951527 kernel: kvm-guest: setup PV sched yield
Jan 13 20:44:49.951536 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 13 20:44:49.951546 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:44:49.951589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:44:49.951617 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:44:49.951628 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:44:49.951638 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:44:49.951655 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:44:49.951667 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:44:49.951679 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:44:49.951693 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:44:49.951705 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:44:49.951716 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:44:49.951731 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:44:49.951742 kernel: Fallback order for Node 0: 0
Jan 13 20:44:49.951754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 13 20:44:49.951768 kernel: Policy zone: DMA32
Jan 13 20:44:49.951779 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:44:49.951791 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Jan 13 20:44:49.951802 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:44:49.951814 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:44:49.951825 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:44:49.951836 kernel: Dynamic Preempt: voluntary
Jan 13 20:44:49.951847 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:44:49.951859 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:44:49.951873 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:44:49.951885 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:44:49.951895 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:44:49.951906 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:44:49.951917 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:44:49.951927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:44:49.951938 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:44:49.951948 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:44:49.951959 kernel: Console: colour dummy device 80x25
Jan 13 20:44:49.951973 kernel: printk: console [ttyS0] enabled
Jan 13 20:44:49.951984 kernel: ACPI: Core revision 20230628
Jan 13 20:44:49.951996 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:44:49.952005 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:44:49.952016 kernel: x2apic enabled
Jan 13 20:44:49.952026 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:44:49.952038 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:44:49.952047 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:44:49.952056 kernel: kvm-guest: setup PV IPIs
Jan 13 20:44:49.952069 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:44:49.952078 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:44:49.952086 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 13 20:44:49.952094 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:44:49.952101 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:44:49.952109 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:44:49.952117 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:44:49.952124 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:44:49.952132 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:44:49.952142 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:44:49.952150 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:44:49.952157 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:44:49.952165 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:44:49.952173 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:44:49.952181 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:44:49.952189 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:44:49.952199 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:44:49.952210 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:44:49.952218 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:44:49.952225 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:44:49.952233 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:44:49.952241 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:44:49.952248 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:44:49.952256 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:44:49.952264 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:44:49.952271 kernel: landlock: Up and running.
Jan 13 20:44:49.952282 kernel: SELinux: Initializing.
Jan 13 20:44:49.952289 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:44:49.952306 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:44:49.952314 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:44:49.952322 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:44:49.952338 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:44:49.952351 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:44:49.952361 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:44:49.952369 kernel: ... version: 0
Jan 13 20:44:49.952381 kernel: ... bit width: 48
Jan 13 20:44:49.952389 kernel: ... generic registers: 6
Jan 13 20:44:49.952397 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:44:49.952404 kernel: ... max period: 00007fffffffffff
Jan 13 20:44:49.952412 kernel: ... fixed-purpose events: 0
Jan 13 20:44:49.952423 kernel: ... event mask: 000000000000003f
Jan 13 20:44:49.952434 kernel: signal: max sigframe size: 1776
Jan 13 20:44:49.952445 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:44:49.952453 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:44:49.952467 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:44:49.952478 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:44:49.952486 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:44:49.952493 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:44:49.952501 kernel: smpboot: Max logical packages: 1
Jan 13 20:44:49.952509 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 13 20:44:49.952516 kernel: devtmpfs: initialized
Jan 13 20:44:49.952524 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:44:49.952533 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 13 20:44:49.952547 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 13 20:44:49.952585 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 13 20:44:49.952594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 13 20:44:49.952601 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 13 20:44:49.952609 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 13 20:44:49.952617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:44:49.952625 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:44:49.952633 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:44:49.952644 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:44:49.952658 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:44:49.952668 kernel: audit: type=2000 audit(1736801090.065:1): state=initialized audit_enabled=0 res=1
Jan 13 20:44:49.952678 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:44:49.952688 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:44:49.952696 kernel: cpuidle: using governor menu
Jan 13 20:44:49.952703 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:44:49.952711 kernel: dca service started, version 1.12.1
Jan 13 20:44:49.952719 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 20:44:49.952727 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:44:49.952738 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:44:49.952749 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:44:49.952760 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:44:49.952769 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:44:49.952780 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:44:49.952790 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:44:49.952801 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:44:49.952812 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:44:49.952819 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:44:49.952830 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:44:49.952838 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:44:49.952845 kernel: ACPI: Interpreter enabled
Jan 13 20:44:49.952853 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:44:49.952861 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:44:49.952868 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:44:49.952876 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:44:49.952884 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:44:49.952891 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:44:49.953149 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:44:49.953295 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:44:49.953457 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:44:49.953475 kernel: PCI host bridge to bus 0000:00
Jan 13 20:44:49.953663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:44:49.953782 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:44:49.953906 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:44:49.954036 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 13 20:44:49.954158 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 13 20:44:49.954275 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 13 20:44:49.954407 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:44:49.954629 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:44:49.954781 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:44:49.954927 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 13 20:44:49.955074 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 13 20:44:49.955257 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 13 20:44:49.955431 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 13 20:44:49.955604 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:44:49.955769 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:44:49.955908 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 13 20:44:49.956056 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 13 20:44:49.956213 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 13 20:44:49.956380 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:44:49.956516 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 13 20:44:49.956677 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 13 20:44:49.956807 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 13 20:44:49.956960 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:44:49.957128 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 13 20:44:49.957276 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 13 20:44:49.957452 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 13 20:44:49.957671 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 13 20:44:49.957865 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:44:49.958032 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:44:49.958188 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:44:49.958323 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 13 20:44:49.958576 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 13 20:44:49.958724 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:44:49.958867 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 13 20:44:49.958879 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:44:49.958889 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:44:49.958914 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:44:49.958922 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:44:49.958930 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:44:49.958937 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:44:49.958945 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:44:49.958953 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:44:49.958960 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:44:49.958967 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:44:49.958975 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:44:49.958985 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:44:49.958993 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:44:49.959001 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:44:49.959008 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:44:49.959016 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:44:49.959023 kernel: iommu: Default domain type: Translated
Jan 13 20:44:49.959031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:44:49.959038 kernel: efivars: Registered efivars operations
Jan 13 20:44:49.959046 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:44:49.959056 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:44:49.959064 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 13 20:44:49.959072 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 13 20:44:49.959079 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 13 20:44:49.959086 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 13 20:44:49.959094 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 13 20:44:49.959102 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 13 20:44:49.959110 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 13 20:44:49.959117 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 13 20:44:49.959277 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:44:49.959439 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:44:49.959598 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:44:49.959615 kernel: vgaarb: loaded
Jan 13 20:44:49.959624 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:44:49.959632 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:44:49.959640 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:44:49.959648 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:44:49.959661 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:44:49.959668 kernel: pnp: PnP ACPI init
Jan 13 20:44:49.959957 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 13 20:44:49.959978 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:44:49.959987 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:44:49.959997 kernel: NET: Registered PF_INET protocol family
Jan 13 20:44:49.960029 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:44:49.960041 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:44:49.960052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:44:49.960060 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:44:49.960068 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:44:49.960076 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:44:49.960084 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:44:49.960093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:44:49.960105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:44:49.960115 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:44:49.960269 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 13 20:44:49.960442 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 13 20:44:49.960610 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:44:49.960735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:44:49.960851 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:44:49.961012 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 13 20:44:49.961218 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 13 20:44:49.961365 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 13 20:44:49.961381 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:44:49.961399 kernel: Initialise system trusted keyrings
Jan 13 20:44:49.961410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:44:49.961419 kernel: Key type asymmetric registered
Jan 13 20:44:49.961426 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:44:49.961446 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:44:49.961456 kernel: io scheduler mq-deadline registered
Jan 13 20:44:49.961472 kernel: io scheduler kyber registered
Jan 13 20:44:49.961486 kernel: io scheduler bfq registered
Jan 13 20:44:49.961497 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:44:49.961512 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:44:49.961520 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:44:49.961531 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:44:49.961539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:44:49.961547 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:44:49.961556 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:44:49.961581 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:44:49.961589 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:44:49.961772 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:44:49.961790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:44:49.961927 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:44:49.962068 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:44:49 UTC (1736801089)
Jan 13 20:44:49.962218 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:44:49.962239 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:44:49.962250 kernel: efifb: probing for efifb
Jan 13 20:44:49.962260 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 13 20:44:49.962271 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 13 20:44:49.962281 kernel: efifb: scrolling: redraw
Jan 13 20:44:49.962291 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 13 20:44:49.962309 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:44:49.962318 kernel: fb0: EFI VGA frame buffer device
Jan 13 20:44:49.962333 kernel: pstore: Using crash dump compression: deflate
Jan 13 20:44:49.962352 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 20:44:49.962363 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:44:49.962371 kernel: Segment Routing with IPv6
Jan 13 20:44:49.962379 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:44:49.962387 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:44:49.962395 kernel: Key type dns_resolver registered
Jan 13 20:44:49.962403 kernel: IPI shorthand broadcast: enabled
Jan 13 20:44:49.962411 kernel: sched_clock: Marking stable (1285003808, 186025092)->(1539925410, -68896510)
Jan 13 20:44:49.962419 kernel: registered taskstats version 1
Jan 13 20:44:49.962428 kernel: Loading compiled-in X.509 certificates
Jan 13 20:44:49.962438 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:44:49.962446 kernel: Key type .fscrypt registered
Jan 13 20:44:49.962454 kernel: Key type fscrypt-provisioning registered
Jan 13 20:44:49.962462 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:44:49.962471 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:44:49.962479 kernel: ima: No architecture policies found Jan 13 20:44:49.962487 kernel: clk: Disabling unused clocks Jan 13 20:44:49.962495 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:44:49.962506 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:44:49.962514 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:44:49.962525 kernel: Run /init as init process Jan 13 20:44:49.962537 kernel: with arguments: Jan 13 20:44:49.962548 kernel: /init Jan 13 20:44:49.962620 kernel: with environment: Jan 13 20:44:49.962630 kernel: HOME=/ Jan 13 20:44:49.962638 kernel: TERM=linux Jan 13 20:44:49.962646 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:44:49.962657 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:44:49.962671 systemd[1]: Detected virtualization kvm. Jan 13 20:44:49.962681 systemd[1]: Detected architecture x86-64. Jan 13 20:44:49.962689 systemd[1]: Running in initrd. Jan 13 20:44:49.962697 systemd[1]: No hostname configured, using default hostname. Jan 13 20:44:49.962706 systemd[1]: Hostname set to . Jan 13 20:44:49.962714 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:44:49.962723 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:44:49.962734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:44:49.962743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 20:44:49.962752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:44:49.962761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:44:49.962770 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:44:49.962779 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:44:49.962789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:44:49.962800 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:44:49.962809 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:44:49.962837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:44:49.962862 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:44:49.962884 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:44:49.962895 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:44:49.962904 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:44:49.962912 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:44:49.962925 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:44:49.962936 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:44:49.962948 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:44:49.962958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:44:49.962967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:44:49.962981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 20:44:49.962990 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:44:49.962998 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:44:49.963011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:44:49.963020 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:44:49.963028 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:44:49.963037 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:44:49.963045 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:44:49.963053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:44:49.963062 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:44:49.963070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:44:49.963079 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:44:49.963118 systemd-journald[194]: Collecting audit messages is disabled. Jan 13 20:44:49.963142 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:44:49.963151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:44:49.963160 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:44:49.963168 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:44:49.963177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:44:49.963186 systemd-journald[194]: Journal started Jan 13 20:44:49.963207 systemd-journald[194]: Runtime Journal (/run/log/journal/37a8838270e9433093dc61e5fcfb1e95) is 6.0M, max 48.3M, 42.2M free. 
Jan 13 20:44:49.961512 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 20:44:49.967593 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:44:49.971484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:44:49.981792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:44:49.985189 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:44:49.993759 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:44:49.995174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:44:50.005597 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:44:50.006744 dracut-cmdline[220]: dracut-dracut-053 Jan 13 20:44:50.008573 kernel: Bridge firewalling registered Jan 13 20:44:50.008583 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 20:44:50.010271 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:44:50.015605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:44:50.023793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:44:50.035141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:44:50.043699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 20:44:50.075333 systemd-resolved[263]: Positive Trust Anchors: Jan 13 20:44:50.075362 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:44:50.075395 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:44:50.078171 systemd-resolved[263]: Defaulting to hostname 'linux'. Jan 13 20:44:50.079492 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:44:50.087247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:44:50.107602 kernel: SCSI subsystem initialized Jan 13 20:44:50.117608 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:44:50.128602 kernel: iscsi: registered transport (tcp) Jan 13 20:44:50.157057 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:44:50.157150 kernel: QLogic iSCSI HBA Driver Jan 13 20:44:50.219279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:44:50.237770 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:44:50.267783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:44:50.267832 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:44:50.268927 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:44:50.318609 kernel: raid6: avx2x4 gen() 20283 MB/s Jan 13 20:44:50.335624 kernel: raid6: avx2x2 gen() 21473 MB/s Jan 13 20:44:50.352996 kernel: raid6: avx2x1 gen() 15046 MB/s Jan 13 20:44:50.353076 kernel: raid6: using algorithm avx2x2 gen() 21473 MB/s Jan 13 20:44:50.371058 kernel: raid6: .... xor() 12569 MB/s, rmw enabled Jan 13 20:44:50.371163 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:44:50.398625 kernel: xor: automatically using best checksumming function avx Jan 13 20:44:50.611620 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:44:50.629030 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:44:50.644880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:44:50.661008 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 13 20:44:50.691325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:44:50.703778 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:44:50.717552 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 13 20:44:50.760744 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:44:50.781019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:44:50.852866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:44:50.858800 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:44:50.879284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:44:50.883211 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 20:44:50.886549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:44:50.892375 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:44:50.958643 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:44:50.958660 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:44:50.958815 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:44:50.958827 kernel: GPT:9289727 != 19775487 Jan 13 20:44:50.958838 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:44:50.958857 kernel: GPT:9289727 != 19775487 Jan 13 20:44:50.958869 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:44:50.958880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:44:50.958890 kernel: libata version 3.00 loaded. Jan 13 20:44:50.890365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:44:50.899242 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:44:50.963294 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:44:50.963311 kernel: AES CTR mode by8 optimization enabled Jan 13 20:44:50.915602 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:44:50.952322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:44:50.952513 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:44:50.998651 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:44:51.054195 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:44:51.054214 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:44:51.054391 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:44:51.054534 kernel: scsi host0: ahci Jan 13 20:44:51.054712 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (471) Jan 13 20:44:51.054732 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (462) Jan 13 20:44:51.054743 kernel: scsi host1: ahci Jan 13 20:44:51.054895 kernel: scsi host2: ahci Jan 13 20:44:51.055052 kernel: scsi host3: ahci Jan 13 20:44:51.055209 kernel: scsi host4: ahci Jan 13 20:44:51.055404 kernel: scsi host5: ahci Jan 13 20:44:51.055553 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 20:44:51.055598 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 20:44:51.055609 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 20:44:51.055620 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 20:44:51.055630 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 20:44:51.055641 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 20:44:50.954729 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:44:50.955889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:44:50.956199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:44:50.957493 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:44:51.000188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:44:51.025136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:44:51.051920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:44:51.063659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:44:51.064887 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:44:51.072484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:44:51.083757 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:44:51.084016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:44:51.084076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:44:51.084421 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:44:51.085509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:44:51.104513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:44:51.136687 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:44:51.152162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:44:51.363672 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:44:51.363762 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:44:51.363776 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:44:51.363789 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:44:51.364578 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:44:51.365591 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:44:51.366596 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:44:51.366616 kernel: ata3.00: applying bridge limits Jan 13 20:44:51.367637 kernel: ata3.00: configured for UDMA/100 Jan 13 20:44:51.368592 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:44:51.392659 disk-uuid[569]: Primary Header is updated. Jan 13 20:44:51.392659 disk-uuid[569]: Secondary Entries is updated. Jan 13 20:44:51.392659 disk-uuid[569]: Secondary Header is updated. Jan 13 20:44:51.396626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:44:51.401587 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:44:51.416659 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:44:51.442496 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:44:51.442522 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:44:52.429724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:44:52.430274 disk-uuid[583]: The operation has completed successfully. Jan 13 20:44:52.459603 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:44:52.459734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:44:52.485844 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 13 20:44:52.489749 sh[598]: Success Jan 13 20:44:52.504592 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:44:52.538822 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:44:52.554148 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:44:52.589188 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:44:52.598087 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:44:52.598114 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:44:52.598125 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:44:52.599130 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:44:52.599879 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:44:52.605081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:44:52.607422 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:44:52.613721 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:44:52.656859 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:44:52.668092 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:44:52.668160 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:44:52.668179 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:44:52.671587 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:44:52.681193 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 13 20:44:52.683010 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:44:52.775610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:44:52.802785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:44:52.826205 systemd-networkd[776]: lo: Link UP Jan 13 20:44:52.826217 systemd-networkd[776]: lo: Gained carrier Jan 13 20:44:52.827913 systemd-networkd[776]: Enumeration completed Jan 13 20:44:52.828062 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:44:52.828321 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:44:52.828325 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:44:52.829726 systemd[1]: Reached target network.target - Network. Jan 13 20:44:52.830544 systemd-networkd[776]: eth0: Link UP Jan 13 20:44:52.830550 systemd-networkd[776]: eth0: Gained carrier Jan 13 20:44:52.830582 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:44:52.852638 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:44:53.009907 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:44:53.016744 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:44:53.074178 ignition[781]: Ignition 2.20.0 Jan 13 20:44:53.074195 ignition[781]: Stage: fetch-offline Jan 13 20:44:53.074264 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:44:53.074279 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:44:53.074413 ignition[781]: parsed url from cmdline: "" Jan 13 20:44:53.074418 ignition[781]: no config URL provided Jan 13 20:44:53.074426 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:44:53.074440 ignition[781]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:44:53.074483 ignition[781]: op(1): [started] loading QEMU firmware config module Jan 13 20:44:53.074490 ignition[781]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:44:53.087454 ignition[781]: op(1): [finished] loading QEMU firmware config module Jan 13 20:44:53.128096 ignition[781]: parsing config with SHA512: 7098f1cab3bac37aeddb43ef77236e5c6d71aa4aa05afc96237c10c243a6f214560466868d635dfadaa5c9d3beb2822f94c3edda7bc414a274b86e9bfd13c7da Jan 13 20:44:53.133763 unknown[781]: fetched base config from "system" Jan 13 20:44:53.133797 unknown[781]: fetched user config from "qemu" Jan 13 20:44:53.134508 ignition[781]: fetch-offline: fetch-offline passed Jan 13 20:44:53.134670 ignition[781]: Ignition finished successfully Jan 13 20:44:53.137385 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:44:53.138251 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:44:53.143836 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 20:44:53.157014 ignition[790]: Ignition 2.20.0 Jan 13 20:44:53.157027 ignition[790]: Stage: kargs Jan 13 20:44:53.157172 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:44:53.157184 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:44:53.160911 ignition[790]: kargs: kargs passed Jan 13 20:44:53.160963 ignition[790]: Ignition finished successfully Jan 13 20:44:53.165233 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:44:53.180704 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:44:53.192854 ignition[798]: Ignition 2.20.0 Jan 13 20:44:53.192865 ignition[798]: Stage: disks Jan 13 20:44:53.193032 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:44:53.193045 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:44:53.193875 ignition[798]: disks: disks passed Jan 13 20:44:53.196250 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:44:53.193922 ignition[798]: Ignition finished successfully Jan 13 20:44:53.197771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:44:53.199358 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:44:53.201543 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:44:53.202615 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:44:53.203042 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:44:53.221757 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:44:53.233802 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:44:53.370139 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:44:53.378756 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 13 20:44:53.483606 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:44:53.484029 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:44:53.485628 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:44:53.498665 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:44:53.500845 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:44:53.502500 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:44:53.507903 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) Jan 13 20:44:53.507924 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:44:53.502573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:44:53.514757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:44:53.514780 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:44:53.514795 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:44:53.502607 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:44:53.509997 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:44:53.516248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:44:53.519443 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 20:44:53.567273 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:44:53.571877 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:44:53.576039 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:44:53.579755 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:44:53.668665 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:44:53.676724 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:44:53.681800 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:44:53.684729 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:44:53.686071 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:44:53.707201 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:44:53.817419 ignition[934]: INFO : Ignition 2.20.0 Jan 13 20:44:53.817419 ignition[934]: INFO : Stage: mount Jan 13 20:44:53.830664 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:44:53.830664 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:44:53.830664 ignition[934]: INFO : mount: mount passed Jan 13 20:44:53.830664 ignition[934]: INFO : Ignition finished successfully Jan 13 20:44:53.820914 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:44:53.836706 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:44:53.846032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 20:44:53.857595 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Jan 13 20:44:53.859639 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:44:53.859665 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:44:53.859681 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:44:53.862582 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:44:53.864449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:44:53.900375 ignition[961]: INFO : Ignition 2.20.0
Jan 13 20:44:53.900375 ignition[961]: INFO : Stage: files
Jan 13 20:44:53.902432 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:53.902432 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:53.905420 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:44:53.906771 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:44:53.906771 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:44:53.910612 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:44:53.912244 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:44:53.912244 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:44:53.911299 unknown[961]: wrote ssh authorized keys file for user: core
Jan 13 20:44:53.916354 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:44:53.916354 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:44:53.931683 systemd-networkd[776]: eth0: Gained IPv6LL
Jan 13 20:44:53.950522 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:44:54.071364 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:44:54.071364 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:44:54.075386 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:44:54.411474 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:44:54.677603 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:44:54.677603 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:44:54.681451 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:44:54.706537 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:44:54.713427 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:44:54.715286 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:44:54.715286 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:44:54.715286 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:44:54.715286 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:44:54.715286 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:44:54.715286 ignition[961]: INFO : files: files passed
Jan 13 20:44:54.715286 ignition[961]: INFO : Ignition finished successfully
Jan 13 20:44:54.716409 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:44:54.729813 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:44:54.733222 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:44:54.735500 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:44:54.735645 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:44:54.744888 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:44:54.747891 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:44:54.747891 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:44:54.752552 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:44:54.750788 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:44:54.752790 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:44:54.763753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:44:54.798852 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:44:54.799042 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:44:54.800193 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:44:54.804237 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:44:54.804989 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:44:54.808064 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:44:54.831753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:44:54.849866 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:44:54.859666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:44:54.861011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:44:54.863305 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:44:54.865501 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:44:54.865662 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:44:54.867824 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:44:54.869533 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:44:54.871541 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:44:54.873617 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:44:54.875605 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:44:54.877748 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:44:54.879874 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:44:54.882126 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:44:54.884330 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:44:54.886860 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:44:54.888644 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:44:54.888790 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:44:54.890909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:44:54.892576 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:44:54.894605 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:44:54.894758 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:44:54.896796 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:44:54.896929 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:44:54.899388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:44:54.899535 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:44:54.901871 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:44:54.903781 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:44:54.907636 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:44:54.909675 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:44:54.911903 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:44:54.913849 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:44:54.913973 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:44:54.915866 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:44:54.915966 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:44:54.918360 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:44:54.918491 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:44:54.920407 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:44:54.920519 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:44:54.937873 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:44:54.939822 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:44:54.939970 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:44:54.943113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:44:54.943992 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:44:54.944232 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:44:54.946274 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:44:54.946509 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:44:54.951224 ignition[1016]: INFO : Ignition 2.20.0
Jan 13 20:44:54.951224 ignition[1016]: INFO : Stage: umount
Jan 13 20:44:54.951224 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:54.951224 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:54.951224 ignition[1016]: INFO : umount: umount passed
Jan 13 20:44:54.951224 ignition[1016]: INFO : Ignition finished successfully
Jan 13 20:44:54.956924 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:44:54.957049 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:44:54.960294 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:44:54.960409 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:44:54.963196 systemd[1]: Stopped target network.target - Network.
Jan 13 20:44:54.965011 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:44:54.965088 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:44:54.967031 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:44:54.967082 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:44:54.969047 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:44:54.969097 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:44:54.969399 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:44:54.969450 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:44:54.970300 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:44:54.970974 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:44:54.979605 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:44:54.980650 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jan 13 20:44:54.981351 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:44:54.981521 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:44:54.984720 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:44:54.984933 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:44:54.988516 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:44:54.988604 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:44:55.002791 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:44:55.003323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:44:55.003402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:44:55.004107 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:44:55.004163 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:44:55.004436 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:44:55.004488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:44:55.004644 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:44:55.004705 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:44:55.005236 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:44:55.015550 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:44:55.015987 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:44:55.038544 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:44:55.038769 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:44:55.041122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:44:55.041187 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:44:55.043256 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:44:55.043302 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:44:55.045338 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:44:55.045393 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:44:55.047784 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:44:55.047837 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:44:55.067023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:44:55.067078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:44:55.079793 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:44:55.080881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:44:55.080953 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:44:55.083113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:44:55.083167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:44:55.087164 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:44:55.087295 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:44:55.151039 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:44:55.151235 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:44:55.153701 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:44:55.155137 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:44:55.155216 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:44:55.173726 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:44:55.181286 systemd[1]: Switching root.
Jan 13 20:44:55.216349 systemd-journald[194]: Journal stopped
Jan 13 20:44:56.377264 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:44:56.377347 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:44:56.377367 kernel: SELinux: policy capability open_perms=1
Jan 13 20:44:56.377384 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:44:56.377399 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:44:56.377422 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:44:56.377438 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:44:56.377454 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:44:56.377470 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:44:56.377491 kernel: audit: type=1403 audit(1736801095.586:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:44:56.377508 systemd[1]: Successfully loaded SELinux policy in 52.268ms.
Jan 13 20:44:56.377536 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.819ms.
Jan 13 20:44:56.377555 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:44:56.377593 systemd[1]: Detected virtualization kvm.
Jan 13 20:44:56.377611 systemd[1]: Detected architecture x86-64.
Jan 13 20:44:56.377627 systemd[1]: Detected first boot.
Jan 13 20:44:56.377647 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:44:56.377664 zram_generator::config[1061]: No configuration found.
Jan 13 20:44:56.377694 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:44:56.377711 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:44:56.377728 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:44:56.377746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:44:56.377765 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:44:56.377782 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:44:56.377799 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:44:56.377816 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:44:56.377838 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:44:56.377855 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:44:56.377872 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:44:56.377889 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:44:56.377907 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:44:56.377924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:44:56.377941 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:44:56.377965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:44:56.377987 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:44:56.378004 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:44:56.378021 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:44:56.378040 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:44:56.378057 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:44:56.378073 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:44:56.378091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:44:56.378108 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:44:56.378128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:44:56.378161 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:44:56.378178 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:44:56.378195 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:44:56.378212 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:44:56.378236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:44:56.378253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:44:56.378270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:44:56.378287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:44:56.378308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:44:56.378325 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:44:56.378342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:44:56.378360 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:44:56.378377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:56.378394 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:44:56.378410 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:44:56.378429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:44:56.378453 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:44:56.378474 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:44:56.378491 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:44:56.378508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:44:56.378528 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:44:56.378546 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:44:56.378586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:44:56.378604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:44:56.378621 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:44:56.378643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:44:56.378660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:44:56.378677 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:44:56.378694 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:44:56.378711 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:44:56.378728 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:44:56.378746 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:44:56.378763 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:44:56.378781 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:44:56.378801 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:44:56.378818 kernel: fuse: init (API version 7.39)
Jan 13 20:44:56.378837 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:44:56.378878 systemd-journald[1124]: Collecting audit messages is disabled.
Jan 13 20:44:56.378908 kernel: loop: module loaded
Jan 13 20:44:56.378924 systemd-journald[1124]: Journal started
Jan 13 20:44:56.378959 systemd-journald[1124]: Runtime Journal (/run/log/journal/37a8838270e9433093dc61e5fcfb1e95) is 6.0M, max 48.3M, 42.2M free.
Jan 13 20:44:56.146303 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:44:56.165204 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:44:56.165848 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:44:56.383662 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:44:56.385630 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:44:56.385669 systemd[1]: Stopped verity-setup.service.
Jan 13 20:44:56.389687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:56.392784 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:44:56.394547 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:44:56.396040 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:44:56.397365 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:44:56.398729 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:44:56.400085 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:44:56.401453 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:44:56.402999 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:44:56.404749 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:44:56.404980 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:44:56.406885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:44:56.407206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:44:56.408884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:44:56.409107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:44:56.410887 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:44:56.411096 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:44:56.412552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:44:56.412769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:44:56.414271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:44:56.416044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:44:56.417656 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:44:56.427582 kernel: ACPI: bus type drm_connector registered
Jan 13 20:44:56.428340 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:44:56.429359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:44:56.437327 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:44:56.474749 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:44:56.477794 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:44:56.479005 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:44:56.479051 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:44:56.481120 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:44:56.483685 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:44:56.490518 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:44:56.492231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:44:56.495672 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:44:56.497927 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:44:56.499201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:44:56.504033 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:44:56.505714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:44:56.507359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:44:56.512102 systemd-journald[1124]: Time spent on flushing to /var/log/journal/37a8838270e9433093dc61e5fcfb1e95 is 40.137ms for 1039 entries.
Jan 13 20:44:56.512102 systemd-journald[1124]: System Journal (/var/log/journal/37a8838270e9433093dc61e5fcfb1e95) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:44:56.576644 systemd-journald[1124]: Received client request to flush runtime journal.
Jan 13 20:44:56.576686 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 20:44:56.511162 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:44:56.515957 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:44:56.517655 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:44:56.518241 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:44:56.521687 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:44:56.541818 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:44:56.545009 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:44:56.549605 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:44:56.562095 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:44:56.571151 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:44:56.573887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:44:56.582516 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:44:56.584580 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
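The journald flush statistics above (40.137 ms for 1039 entries) can be sanity-checked with a short sketch; this is illustrative arithmetic, not part of the log:

```python
# Per-entry cost implied by the journald flush message above.
flush_ms = 40.137      # "Time spent on flushing ... is 40.137ms"
entries = 1039         # "... for 1039 entries"
per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.1f} us per entry")  # ~38.6 us per entry
```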
Jan 13 20:44:56.593982 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:44:56.594855 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:44:56.606268 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:44:56.609993 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:44:56.627115 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:44:56.629689 kernel: loop1: detected capacity change from 0 to 211296
Jan 13 20:44:56.633777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:44:56.663319 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jan 13 20:44:56.663781 kernel: loop2: detected capacity change from 0 to 140992
Jan 13 20:44:56.663348 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Jan 13 20:44:56.671970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:44:56.710582 kernel: loop3: detected capacity change from 0 to 138184
Jan 13 20:44:56.723606 kernel: loop4: detected capacity change from 0 to 211296
Jan 13 20:44:56.734593 kernel: loop5: detected capacity change from 0 to 140992
Jan 13 20:44:56.745113 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:44:56.745904 (sd-merge)[1200]: Merged extensions into '/usr'.
Jan 13 20:44:56.750376 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:44:56.750394 systemd[1]: Reloading...
Jan 13 20:44:56.822705 zram_generator::config[1229]: No configuration found.
Jan 13 20:44:56.864508 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:44:56.949808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:44:57.001023 systemd[1]: Reloading finished in 250 ms.
Jan 13 20:44:57.038629 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:44:57.040313 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:44:57.055726 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:44:57.058044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:44:57.067645 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:44:57.067661 systemd[1]: Reloading...
Jan 13 20:44:57.115666 zram_generator::config[1291]: No configuration found.
Jan 13 20:44:57.111459 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:44:57.111949 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:44:57.113222 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:44:57.116303 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 13 20:44:57.116455 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Jan 13 20:44:57.125434 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:44:57.125700 systemd-tmpfiles[1264]: Skipping /boot
Jan 13 20:44:57.139366 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
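The docker.socket warning above means line 6 of the unit file still points at the legacy /var/run/ symlink. A corrected [Socket] section would look like this (sketch based on the log message; systemd already rewrites the path at runtime):

```ini
[Socket]
# Legacy form that triggers the warning at docker.socket:6:
#ListenStream=/var/run/docker.sock
# Updated path under /run, as the log suggests:
ListenStream=/run/docker.sock
```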
Jan 13 20:44:57.139511 systemd-tmpfiles[1264]: Skipping /boot
Jan 13 20:44:57.230323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:44:57.279541 systemd[1]: Reloading finished in 211 ms.
Jan 13 20:44:57.300130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:44:57.312139 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:44:57.319423 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:44:57.322042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:44:57.324767 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:44:57.328793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:44:57.333441 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:44:57.336750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:44:57.342930 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:57.343108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:44:57.344404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:44:57.347134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:44:57.349797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:44:57.350964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:44:57.351055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:57.356221 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:44:57.358189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:44:57.358677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:44:57.361223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:44:57.361512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:44:57.368413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:44:57.369577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:44:57.375374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:57.376775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:44:57.384875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:44:57.388708 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:44:57.390967 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Jan 13 20:44:57.392853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:44:57.394109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:44:57.394268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:57.395867 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:44:57.398172 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:44:57.400097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:44:57.400738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:44:57.403815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:44:57.405287 augenrules[1365]: No rules
Jan 13 20:44:57.405741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:44:57.408338 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:44:57.408746 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:44:57.410758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:44:57.411004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:44:57.417086 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:44:57.433832 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:44:57.436998 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:44:57.441967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:57.450817 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:44:57.452019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:44:57.453801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:44:57.462224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:44:57.472811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:44:57.477788 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:44:57.479016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:44:57.483534 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:44:57.487498 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:44:57.488669 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:44:57.488771 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:44:57.492282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:44:57.492536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:44:57.494540 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:44:57.494896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:44:57.496869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:44:57.497056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:44:57.497145 augenrules[1394]: /sbin/augenrules: No change
Jan 13 20:44:57.510153 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:44:57.521098 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1393)
Jan 13 20:44:57.521211 augenrules[1427]: No rules
Jan 13 20:44:57.515923 systemd-resolved[1333]: Positive Trust Anchors:
Jan 13 20:44:57.515934 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:44:57.515966 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:44:57.517294 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:44:57.520533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:44:57.522136 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Jan 13 20:44:57.522661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:44:57.525169 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:44:57.526386 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:44:57.527738 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:44:57.552009 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:44:57.570090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:44:57.571824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:44:57.582183 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:44:57.584501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
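The negative trust anchor list printed by systemd-resolved exempts a name from DNSSEC validation when it equals an anchor or is a subdomain of one. A minimal sketch of that suffix matching (illustrative Python; a subset of the anchors from the log line above):

```python
# Subset of the negative trust anchors from the systemd-resolved log line.
NEGATIVE_ANCHORS = {
    "home.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "corp", "home", "internal", "intranet", "lan",
    "local", "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    """True if name equals an anchor or is a subdomain of one."""
    labels = name.rstrip(".").split(".")
    # Test every label suffix of the name against the anchor set.
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))

print(under_negative_anchor("printer.local"))  # True  (subdomain of "local")
print(under_negative_anchor("example.com"))    # False
```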
Jan 13 20:44:57.584610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:44:57.587999 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:44:57.601591 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:44:57.604911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:44:57.606626 systemd-networkd[1414]: lo: Link UP
Jan 13 20:44:57.606665 systemd-networkd[1414]: lo: Gained carrier
Jan 13 20:44:57.611590 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:44:57.614362 systemd-networkd[1414]: Enumeration completed
Jan 13 20:44:57.614839 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:44:57.614843 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:44:57.615468 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:44:57.616697 systemd[1]: Reached target network.target - Network.
Jan 13 20:44:57.619067 systemd-networkd[1414]: eth0: Link UP
Jan 13 20:44:57.619078 systemd-networkd[1414]: eth0: Gained carrier
Jan 13 20:44:57.619098 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:44:57.629304 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
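The "potentially unpredictable interface name" warning from systemd-networkd can be avoided by matching the interface on a stable property instead of its name. A sketch of such a .network file (the MAC address below is a placeholder, not from this log):

```ini
[Match]
# Placeholder MAC; matching on MACAddress (or Path=) instead of Name=
# sidesteps the unpredictable-interface-name warning seen above.
MACAddress=52:54:00:xx:xx:xx

[Network]
DHCP=yes
```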
Jan 13 20:44:57.632639 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:44:57.636969 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 13 20:44:57.644201 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 20:44:57.644878 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 20:44:57.645146 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 20:44:57.637199 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:44:57.673735 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:44:57.675721 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:44:59.385593 systemd-timesyncd[1444]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:44:59.385692 systemd-timesyncd[1444]: Initial clock synchronization to Mon 2025-01-13 20:44:59.385399 UTC.
Jan 13 20:44:59.385739 systemd-resolved[1333]: Clock change detected. Flushing caches.
Jan 13 20:44:59.402602 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:44:59.400718 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:44:59.414612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:44:59.414942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:44:59.465307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
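The jump in journal timestamps around the "Clock change detected" message reflects timesyncd stepping the system clock. The rough size of the step can be estimated from the two adjacent entries (illustrative Python; an approximation, since the entries are not strictly back-to-back):

```python
from datetime import datetime

# Adjacent journal timestamps around the initial clock synchronization:
# "Reached target time-set" vs. "Contacted time server".
fmt = "%H:%M:%S.%f"
before = datetime.strptime("20:44:57.675721", fmt)
after = datetime.strptime("20:44:59.385593", fmt)
step = (after - before).total_seconds()
print(f"clock stepped forward by roughly {step:.2f} s")  # roughly 1.71 s
```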
Jan 13 20:44:59.489585 kernel: kvm_amd: TSC scaling supported
Jan 13 20:44:59.489649 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 20:44:59.489663 kernel: kvm_amd: Nested Paging enabled
Jan 13 20:44:59.490801 kernel: kvm_amd: LBR virtualization supported
Jan 13 20:44:59.490826 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 20:44:59.492487 kernel: kvm_amd: Virtual GIF supported
Jan 13 20:44:59.512505 kernel: EDAC MC: Ver: 3.0.0
Jan 13 20:44:59.532559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:44:59.545735 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:44:59.558602 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:44:59.569192 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:44:59.598032 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:44:59.599829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:44:59.601164 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:44:59.602532 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:44:59.604020 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:44:59.605684 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:44:59.607115 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:44:59.608604 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:44:59.610107 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:44:59.610140 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:44:59.611217 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:44:59.612902 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:44:59.615870 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:44:59.627978 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:44:59.630983 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:44:59.632728 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:44:59.633994 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:44:59.635005 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:44:59.636106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:44:59.636136 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:44:59.637115 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:44:59.639263 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:44:59.643596 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:44:59.647227 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:44:59.648343 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:44:59.650196 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:44:59.659480 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:44:59.659764 jq[1473]: false
Jan 13 20:44:59.654602 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:44:59.660349 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:44:59.664171 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:44:59.671589 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:44:59.673225 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:44:59.675266 dbus-daemon[1472]: [system] SELinux support is enabled
Jan 13 20:44:59.673783 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:44:59.674610 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:44:59.677356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:44:59.679875 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:44:59.685840 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:44:59.686073 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:44:59.686420 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:44:59.686644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:44:59.695306 jq[1487]: true
Jan 13 20:44:59.699084 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:44:59.700557 extend-filesystems[1474]: Found loop3
Jan 13 20:44:59.700557 extend-filesystems[1474]: Found loop4
Jan 13 20:44:59.700557 extend-filesystems[1474]: Found loop5
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found sr0
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda1
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda2
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda3
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found usr
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda4
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda6
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda7
Jan 13 20:44:59.714510 extend-filesystems[1474]: Found vda9
Jan 13 20:44:59.714510 extend-filesystems[1474]: Checking size of /dev/vda9
Jan 13 20:44:59.727396 update_engine[1486]: I20250113 20:44:59.707361 1486 main.cc:92] Flatcar Update Engine starting
Jan 13 20:44:59.727396 update_engine[1486]: I20250113 20:44:59.716881 1486 update_check_scheduler.cc:74] Next update check in 2m9s
Jan 13 20:44:59.701212 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:44:59.732676 extend-filesystems[1474]: Resized partition /dev/vda9
Jan 13 20:44:59.735533 tar[1490]: linux-amd64/helm
Jan 13 20:44:59.701493 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:44:59.736088 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:44:59.705887 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:44:59.738580 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:44:59.738602 jq[1495]: true
Jan 13 20:44:59.714630 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:44:59.714670 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:44:59.719849 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:44:59.719866 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:44:59.728071 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:44:59.731577 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:44:59.746589 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1393)
Jan 13 20:44:59.783178 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:44:59.804930 systemd-logind[1485]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:44:59.804963 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:44:59.806694 systemd-logind[1485]: New seat seat0.
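The ext4 online-resize messages give the filesystem size in 4 KiB blocks; the implied capacities work out as follows (illustrative arithmetic from the figures in the log):

```python
# "resizing filesystem from 553472 to 1864699 blocks", 4 KiB blocks.
BLOCK = 4096
old_bytes = 553472 * BLOCK
new_bytes = 1864699 * BLOCK
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
# 2.11 GiB -> 7.11 GiB
```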
Jan 13 20:44:59.807563 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:44:59.807563 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:44:59.807563 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:44:59.817297 extend-filesystems[1474]: Resized filesystem in /dev/vda9
Jan 13 20:44:59.808888 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:44:59.809437 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:44:59.816097 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:44:59.818499 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:44:59.821789 bash[1526]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:44:59.823785 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:44:59.825865 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:44:59.934141 containerd[1494]: time="2025-01-13T20:44:59.933966484Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:44:59.961817 containerd[1494]: time="2025-01-13T20:44:59.961640242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.963647897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.963694404Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.963718659Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.963939644Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.963959932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.964058026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.964074957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.964322872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.964342339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.964359150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964488 containerd[1494]: time="2025-01-13T20:44:59.964371503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964784 containerd[1494]: time="2025-01-13T20:44:59.964516936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.964853 containerd[1494]: time="2025-01-13T20:44:59.964821257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:44:59.965032 containerd[1494]: time="2025-01-13T20:44:59.965000373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:44:59.965032 containerd[1494]: time="2025-01-13T20:44:59.965023376Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:44:59.965173 containerd[1494]: time="2025-01-13T20:44:59.965145755Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:44:59.965240 containerd[1494]: time="2025-01-13T20:44:59.965221367Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:44:59.970814 containerd[1494]: time="2025-01-13T20:44:59.970777234Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:44:59.970901 containerd[1494]: time="2025-01-13T20:44:59.970819543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:44:59.970901 containerd[1494]: time="2025-01-13T20:44:59.970838429Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:44:59.970901 containerd[1494]: time="2025-01-13T20:44:59.970852996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:44:59.970901 containerd[1494]: time="2025-01-13T20:44:59.970868285Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:44:59.971068 containerd[1494]: time="2025-01-13T20:44:59.971039837Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:44:59.971321 containerd[1494]: time="2025-01-13T20:44:59.971293462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:44:59.971434 containerd[1494]: time="2025-01-13T20:44:59.971408017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:44:59.971434 containerd[1494]: time="2025-01-13T20:44:59.971431942Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:44:59.971532 containerd[1494]: time="2025-01-13T20:44:59.971445798Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:44:59.971532 containerd[1494]: time="2025-01-13T20:44:59.971474131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971532 containerd[1494]: time="2025-01-13T20:44:59.971487005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971532 containerd[1494]: time="2025-01-13T20:44:59.971498877Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971532 containerd[1494]: time="2025-01-13T20:44:59.971511671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971532 containerd[1494]: time="2025-01-13T20:44:59.971530296Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971543591Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971556044Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971568097Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971587303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971601530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971612300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971624954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971636926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971649670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971661272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..."
type=io.containerd.grpc.v1 Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971673515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971685718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971700836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.971704 containerd[1494]: time="2025-01-13T20:44:59.971713540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.972099 containerd[1494]: time="2025-01-13T20:44:59.971726123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.972099 containerd[1494]: time="2025-01-13T20:44:59.971741222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.972099 containerd[1494]: time="2025-01-13T20:44:59.971755198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:44:59.972099 containerd[1494]: time="2025-01-13T20:44:59.971773001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.972099 containerd[1494]: time="2025-01-13T20:44:59.971787098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.972099 containerd[1494]: time="2025-01-13T20:44:59.971798128Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:44:59.972575 containerd[1494]: time="2025-01-13T20:44:59.972544909Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 13 20:44:59.972575 containerd[1494]: time="2025-01-13T20:44:59.972572380Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:44:59.972665 containerd[1494]: time="2025-01-13T20:44:59.972582820Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:44:59.972665 containerd[1494]: time="2025-01-13T20:44:59.972595644Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:44:59.972665 containerd[1494]: time="2025-01-13T20:44:59.972639706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:44:59.972665 containerd[1494]: time="2025-01-13T20:44:59.972652661Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:44:59.972665 containerd[1494]: time="2025-01-13T20:44:59.972663170Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:44:59.972802 containerd[1494]: time="2025-01-13T20:44:59.972673750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:44:59.972981 containerd[1494]: time="2025-01-13T20:44:59.972922517Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:44:59.972981 containerd[1494]: time="2025-01-13T20:44:59.972976858Z" level=info msg="Connect containerd service" Jan 13 20:44:59.973200 containerd[1494]: time="2025-01-13T20:44:59.973016433Z" level=info msg="using legacy CRI server" Jan 13 20:44:59.973200 containerd[1494]: time="2025-01-13T20:44:59.973025159Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:44:59.973200 containerd[1494]: time="2025-01-13T20:44:59.973142479Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:44:59.973785 containerd[1494]: time="2025-01-13T20:44:59.973753815Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:44:59.974048 containerd[1494]: time="2025-01-13T20:44:59.974001680Z" level=info msg="Start subscribing containerd event" Jan 13 20:44:59.974090 containerd[1494]: time="2025-01-13T20:44:59.974051383Z" level=info msg="Start recovering state" Jan 13 20:44:59.974121 containerd[1494]: time="2025-01-13T20:44:59.974107799Z" level=info msg="Start event monitor" Jan 13 20:44:59.974152 containerd[1494]: time="2025-01-13T20:44:59.974123969Z" level=info msg="Start 
snapshots syncer" Jan 13 20:44:59.974152 containerd[1494]: time="2025-01-13T20:44:59.974133547Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:44:59.974152 containerd[1494]: time="2025-01-13T20:44:59.974141202Z" level=info msg="Start streaming server" Jan 13 20:44:59.975780 containerd[1494]: time="2025-01-13T20:44:59.975739007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:44:59.978162 containerd[1494]: time="2025-01-13T20:44:59.975926840Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:44:59.978162 containerd[1494]: time="2025-01-13T20:44:59.976056703Z" level=info msg="containerd successfully booted in 0.043490s" Jan 13 20:44:59.976186 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:45:00.105839 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:45:00.133688 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:45:00.142738 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:45:00.144893 tar[1490]: linux-amd64/LICENSE Jan 13 20:45:00.145021 tar[1490]: linux-amd64/README.md Jan 13 20:45:00.151552 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:45:00.151831 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:45:00.154941 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:45:00.157644 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:45:00.170478 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:45:00.181788 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:45:00.196694 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:45:00.198436 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 13 20:45:00.439720 systemd-networkd[1414]: eth0: Gained IPv6LL Jan 13 20:45:00.443906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:45:00.446023 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:45:00.454802 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:45:00.457715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:00.460484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:45:00.482362 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:45:00.482719 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:45:00.484787 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:45:00.489308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:45:01.102722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:01.104689 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:45:01.108569 systemd[1]: Startup finished in 1.438s (kernel) + 5.839s (initrd) + 3.864s (userspace) = 11.143s. 
Jan 13 20:45:01.108801 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:01.604633 kubelet[1584]: E0113 20:45:01.604529 1584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:01.610753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:01.610985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:01.611429 systemd[1]: kubelet.service: Consumed 1.034s CPU time. Jan 13 20:45:05.809415 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:45:05.819736 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:53394.service - OpenSSH per-connection server daemon (10.0.0.1:53394). Jan 13 20:45:05.873439 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 53394 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:05.876133 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:05.886743 systemd-logind[1485]: New session 1 of user core. Jan 13 20:45:05.888439 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:45:05.900691 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:45:05.913861 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:45:05.927784 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 20:45:05.931027 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:45:06.046031 systemd[1603]: Queued start job for default target default.target. Jan 13 20:45:06.057810 systemd[1603]: Created slice app.slice - User Application Slice. Jan 13 20:45:06.057837 systemd[1603]: Reached target paths.target - Paths. Jan 13 20:45:06.057859 systemd[1603]: Reached target timers.target - Timers. Jan 13 20:45:06.059594 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:45:06.071809 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:45:06.071955 systemd[1603]: Reached target sockets.target - Sockets. Jan 13 20:45:06.071975 systemd[1603]: Reached target basic.target - Basic System. Jan 13 20:45:06.072014 systemd[1603]: Reached target default.target - Main User Target. Jan 13 20:45:06.072049 systemd[1603]: Startup finished in 133ms. Jan 13 20:45:06.072657 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:45:06.081574 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:45:06.149442 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:53396.service - OpenSSH per-connection server daemon (10.0.0.1:53396). Jan 13 20:45:06.203748 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 53396 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:06.205262 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:06.209202 systemd-logind[1485]: New session 2 of user core. Jan 13 20:45:06.218579 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:45:06.271554 sshd[1616]: Connection closed by 10.0.0.1 port 53396 Jan 13 20:45:06.271889 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:06.281239 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:53396.service: Deactivated successfully. 
Jan 13 20:45:06.283010 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:45:06.284551 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:45:06.292725 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:53406.service - OpenSSH per-connection server daemon (10.0.0.1:53406). Jan 13 20:45:06.293804 systemd-logind[1485]: Removed session 2. Jan 13 20:45:06.324666 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 53406 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:06.326069 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:06.329565 systemd-logind[1485]: New session 3 of user core. Jan 13 20:45:06.338564 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:45:06.388854 sshd[1623]: Connection closed by 10.0.0.1 port 53406 Jan 13 20:45:06.389284 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:06.401722 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:53406.service: Deactivated successfully. Jan 13 20:45:06.403667 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:45:06.405512 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:45:06.411884 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:53416.service - OpenSSH per-connection server daemon (10.0.0.1:53416). Jan 13 20:45:06.413032 systemd-logind[1485]: Removed session 3. Jan 13 20:45:06.444266 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 53416 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:06.445882 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:06.449912 systemd-logind[1485]: New session 4 of user core. Jan 13 20:45:06.459580 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 13 20:45:06.515724 sshd[1630]: Connection closed by 10.0.0.1 port 53416 Jan 13 20:45:06.516583 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:06.526563 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:53416.service: Deactivated successfully. Jan 13 20:45:06.528509 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:45:06.530270 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:45:06.538689 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:53430.service - OpenSSH per-connection server daemon (10.0.0.1:53430). Jan 13 20:45:06.539685 systemd-logind[1485]: Removed session 4. Jan 13 20:45:06.571662 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 53430 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:06.573242 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:06.577081 systemd-logind[1485]: New session 5 of user core. Jan 13 20:45:06.586573 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:45:06.644962 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:45:06.645321 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:06.666763 sudo[1638]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:06.668260 sshd[1637]: Connection closed by 10.0.0.1 port 53430 Jan 13 20:45:06.668729 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:06.688381 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:53430.service: Deactivated successfully. Jan 13 20:45:06.690589 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:45:06.692094 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:45:06.693626 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:53434.service - OpenSSH per-connection server daemon (10.0.0.1:53434). 
Jan 13 20:45:06.694515 systemd-logind[1485]: Removed session 5. Jan 13 20:45:06.730433 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 53434 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:06.731906 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:06.735914 systemd-logind[1485]: New session 6 of user core. Jan 13 20:45:06.756566 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:45:06.811189 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:45:06.811643 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:06.815590 sudo[1647]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:06.822053 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:45:06.822390 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:06.842733 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:45:06.874097 augenrules[1669]: No rules Jan 13 20:45:06.876126 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:45:06.876386 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:45:06.877752 sudo[1646]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:06.879263 sshd[1645]: Connection closed by 10.0.0.1 port 53434 Jan 13 20:45:06.879705 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:06.890144 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:53434.service: Deactivated successfully. Jan 13 20:45:06.891949 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:45:06.893786 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. 
Jan 13 20:45:06.905829 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:53442.service - OpenSSH per-connection server daemon (10.0.0.1:53442). Jan 13 20:45:06.906778 systemd-logind[1485]: Removed session 6. Jan 13 20:45:06.938566 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 53442 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:45:06.940036 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:06.943908 systemd-logind[1485]: New session 7 of user core. Jan 13 20:45:06.959676 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:45:07.014439 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:45:07.014831 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:07.295706 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:45:07.295872 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:45:07.571695 dockerd[1700]: time="2025-01-13T20:45:07.571532398Z" level=info msg="Starting up" Jan 13 20:45:07.940600 dockerd[1700]: time="2025-01-13T20:45:07.940488992Z" level=info msg="Loading containers: start." Jan 13 20:45:08.124479 kernel: Initializing XFRM netlink socket Jan 13 20:45:08.228750 systemd-networkd[1414]: docker0: Link UP Jan 13 20:45:08.273103 dockerd[1700]: time="2025-01-13T20:45:08.273036819Z" level=info msg="Loading containers: done." Jan 13 20:45:08.288606 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3274463560-merged.mount: Deactivated successfully. 
Jan 13 20:45:08.291186 dockerd[1700]: time="2025-01-13T20:45:08.291118271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:45:08.291281 dockerd[1700]: time="2025-01-13T20:45:08.291241903Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:45:08.291422 dockerd[1700]: time="2025-01-13T20:45:08.291387897Z" level=info msg="Daemon has completed initialization" Jan 13 20:45:08.335018 dockerd[1700]: time="2025-01-13T20:45:08.334913077Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:45:08.335232 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:45:09.053722 containerd[1494]: time="2025-01-13T20:45:09.053668503Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:45:09.726228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665635342.mount: Deactivated successfully. 
Jan 13 20:45:10.749528 containerd[1494]: time="2025-01-13T20:45:10.749442348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:10.750382 containerd[1494]: time="2025-01-13T20:45:10.750308402Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:45:10.751859 containerd[1494]: time="2025-01-13T20:45:10.751808986Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:10.754751 containerd[1494]: time="2025-01-13T20:45:10.754708943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:10.755666 containerd[1494]: time="2025-01-13T20:45:10.755615413Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.701902406s" Jan 13 20:45:10.755666 containerd[1494]: time="2025-01-13T20:45:10.755651501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:45:10.776948 containerd[1494]: time="2025-01-13T20:45:10.776900202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:45:11.861236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 13 20:45:11.872791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:12.024111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:12.029585 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:45:12.084110 kubelet[1980]: E0113 20:45:12.083927 1980 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:45:12.092243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:45:12.092500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:45:13.159326 containerd[1494]: time="2025-01-13T20:45:13.159254966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:13.160295 containerd[1494]: time="2025-01-13T20:45:13.160232750Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Jan 13 20:45:13.161383 containerd[1494]: time="2025-01-13T20:45:13.161356336Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:13.164089 containerd[1494]: time="2025-01-13T20:45:13.164056449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:13.165151 containerd[1494]: time="2025-01-13T20:45:13.165099826Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.388153006s"
Jan 13 20:45:13.165151 containerd[1494]: time="2025-01-13T20:45:13.165148346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 13 20:45:13.187387 containerd[1494]: time="2025-01-13T20:45:13.187341749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 20:45:15.270799 containerd[1494]: time="2025-01-13T20:45:15.270723319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:15.276325 containerd[1494]: time="2025-01-13T20:45:15.276274988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Jan 13 20:45:15.297102 containerd[1494]: time="2025-01-13T20:45:15.297053177Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:15.315535 containerd[1494]: time="2025-01-13T20:45:15.315497541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:15.316462 containerd[1494]: time="2025-01-13T20:45:15.316420451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.129041021s"
Jan 13 20:45:15.316537 containerd[1494]: time="2025-01-13T20:45:15.316463592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 13 20:45:15.339206 containerd[1494]: time="2025-01-13T20:45:15.339017851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 20:45:16.449344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140732578.mount: Deactivated successfully.
Jan 13 20:45:17.025847 containerd[1494]: time="2025-01-13T20:45:17.025781583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:17.026605 containerd[1494]: time="2025-01-13T20:45:17.026562136Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Jan 13 20:45:17.027742 containerd[1494]: time="2025-01-13T20:45:17.027701433Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:17.029893 containerd[1494]: time="2025-01-13T20:45:17.029865490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:17.030531 containerd[1494]: time="2025-01-13T20:45:17.030473690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.691423488s"
Jan 13 20:45:17.030580 containerd[1494]: time="2025-01-13T20:45:17.030530487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 13 20:45:17.051796 containerd[1494]: time="2025-01-13T20:45:17.051743952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:45:17.575292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008257022.mount: Deactivated successfully.
Jan 13 20:45:18.525649 containerd[1494]: time="2025-01-13T20:45:18.525581197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:18.526525 containerd[1494]: time="2025-01-13T20:45:18.526434426Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 20:45:18.527768 containerd[1494]: time="2025-01-13T20:45:18.527724095Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:18.530388 containerd[1494]: time="2025-01-13T20:45:18.530340210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:18.531420 containerd[1494]: time="2025-01-13T20:45:18.531390529Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.479602595s"
Jan 13 20:45:18.531420 containerd[1494]: time="2025-01-13T20:45:18.531418452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 20:45:18.555113 containerd[1494]: time="2025-01-13T20:45:18.555052696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:45:19.047695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265450720.mount: Deactivated successfully.
Jan 13 20:45:19.053566 containerd[1494]: time="2025-01-13T20:45:19.053512287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:19.054297 containerd[1494]: time="2025-01-13T20:45:19.054235644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 13 20:45:19.055500 containerd[1494]: time="2025-01-13T20:45:19.055440072Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:19.058941 containerd[1494]: time="2025-01-13T20:45:19.058719942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:19.059688 containerd[1494]: time="2025-01-13T20:45:19.059651789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 504.55489ms"
Jan 13 20:45:19.059688 containerd[1494]: time="2025-01-13T20:45:19.059683007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 20:45:19.083186 containerd[1494]: time="2025-01-13T20:45:19.083137444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 20:45:19.630907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978061406.mount: Deactivated successfully.
Jan 13 20:45:21.226936 containerd[1494]: time="2025-01-13T20:45:21.226843713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:21.227884 containerd[1494]: time="2025-01-13T20:45:21.227832958Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jan 13 20:45:21.229256 containerd[1494]: time="2025-01-13T20:45:21.229223345Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:21.232761 containerd[1494]: time="2025-01-13T20:45:21.232724480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:21.234196 containerd[1494]: time="2025-01-13T20:45:21.234146255Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.150968575s"
Jan 13 20:45:21.234196 containerd[1494]: time="2025-01-13T20:45:21.234179438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 13 20:45:22.342720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:45:22.351680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:22.495793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:22.500590 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:45:22.542090 kubelet[2213]: E0113 20:45:22.542005 2213 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:45:22.547619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:45:22.547839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:45:23.146021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:23.159701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:23.177679 systemd[1]: Reloading requested from client PID 2228 ('systemctl') (unit session-7.scope)...
Jan 13 20:45:23.177695 systemd[1]: Reloading...
Jan 13 20:45:23.263483 zram_generator::config[2271]: No configuration found.
Jan 13 20:45:23.430837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:23.506213 systemd[1]: Reloading finished in 328 ms.
Jan 13 20:45:23.562635 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:23.567227 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:45:23.567503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:23.569152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:45:23.728978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:45:23.734821 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:45:23.781119 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:23.781660 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:45:23.781660 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:45:23.781875 kubelet[2317]: I0113 20:45:23.781623 2317 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:45:24.034025 kubelet[2317]: I0113 20:45:24.033885 2317 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:45:24.034025 kubelet[2317]: I0113 20:45:24.033933 2317 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:45:24.034851 kubelet[2317]: I0113 20:45:24.034329 2317 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:45:24.051159 kubelet[2317]: E0113 20:45:24.051132 2317 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.051828 kubelet[2317]: I0113 20:45:24.051772 2317 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:45:24.063683 kubelet[2317]: I0113 20:45:24.063645 2317 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:45:24.064000 kubelet[2317]: I0113 20:45:24.063971 2317 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:45:24.064164 kubelet[2317]: I0113 20:45:24.064132 2317 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:45:24.064263 kubelet[2317]: I0113 20:45:24.064166 2317 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:45:24.064263 kubelet[2317]: I0113 20:45:24.064177 2317 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:45:24.064311 kubelet[2317]: I0113 20:45:24.064297 2317 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:45:24.064430 kubelet[2317]: I0113 20:45:24.064394 2317 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:45:24.064430 kubelet[2317]: I0113 20:45:24.064424 2317 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:45:24.064499 kubelet[2317]: I0113 20:45:24.064484 2317 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:45:24.064524 kubelet[2317]: I0113 20:45:24.064511 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:45:24.066665 kubelet[2317]: I0113 20:45:24.066182 2317 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:45:24.066754 kubelet[2317]: W0113 20:45:24.066645 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.066754 kubelet[2317]: E0113 20:45:24.066702 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.067047 kubelet[2317]: W0113 20:45:24.067008 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.067097 kubelet[2317]: E0113 20:45:24.067053 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.069746 kubelet[2317]: I0113 20:45:24.069378 2317 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:45:24.070988 kubelet[2317]: W0113 20:45:24.070653 2317 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:45:24.071887 kubelet[2317]: I0113 20:45:24.071566 2317 server.go:1256] "Started kubelet"
Jan 13 20:45:24.074224 kubelet[2317]: I0113 20:45:24.074195 2317 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:45:24.075511 kubelet[2317]: I0113 20:45:24.075488 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:45:24.075615 kubelet[2317]: I0113 20:45:24.075491 2317 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:45:24.077055 kubelet[2317]: I0113 20:45:24.077008 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:45:24.077441 kubelet[2317]: I0113 20:45:24.077421 2317 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:45:24.079961 kubelet[2317]: I0113 20:45:24.079051 2317 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:45:24.079961 kubelet[2317]: I0113 20:45:24.079717 2317 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:45:24.079961 kubelet[2317]: I0113 20:45:24.079800 2317 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:45:24.080059 kubelet[2317]: I0113 20:45:24.080041 2317 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:45:24.080132 kubelet[2317]: I0113 20:45:24.080106 2317 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:45:24.080555 kubelet[2317]: W0113 20:45:24.080508 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.080604 kubelet[2317]: E0113 20:45:24.080563 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.080686 kubelet[2317]: E0113 20:45:24.080655 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms"
Jan 13 20:45:24.080793 kubelet[2317]: E0113 20:45:24.080762 2317 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b6b9dc8733b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:45:24.071535419 +0000 UTC m=+0.332411612,LastTimestamp:2025-01-13 20:45:24.071535419 +0000 UTC m=+0.332411612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 20:45:24.085123 kubelet[2317]: I0113 20:45:24.084522 2317 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:45:24.095730 kubelet[2317]: I0113 20:45:24.095695 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:45:24.095904 kubelet[2317]: E0113 20:45:24.095854 2317 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:45:24.097911 kubelet[2317]: I0113 20:45:24.097890 2317 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:45:24.098026 kubelet[2317]: I0113 20:45:24.098010 2317 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:45:24.098102 kubelet[2317]: I0113 20:45:24.098088 2317 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:45:24.098247 kubelet[2317]: E0113 20:45:24.098226 2317 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:45:24.098994 kubelet[2317]: W0113 20:45:24.098916 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.099155 kubelet[2317]: E0113 20:45:24.099137 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.103845 kubelet[2317]: I0113 20:45:24.103803 2317 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:45:24.103845 kubelet[2317]: I0113 20:45:24.103824 2317 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:45:24.103845 kubelet[2317]: I0113 20:45:24.103838 2317 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:45:24.108441 kubelet[2317]: I0113 20:45:24.108409 2317 policy_none.go:49] "None policy: Start"
Jan 13 20:45:24.109046 kubelet[2317]: I0113 20:45:24.109018 2317 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:45:24.109095 kubelet[2317]: I0113 20:45:24.109053 2317 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:45:24.116475 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:45:24.130021 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:45:24.133093 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:45:24.140372 kubelet[2317]: I0113 20:45:24.140333 2317 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:45:24.140723 kubelet[2317]: I0113 20:45:24.140701 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:45:24.142427 kubelet[2317]: E0113 20:45:24.142368 2317 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 13 20:45:24.180655 kubelet[2317]: I0113 20:45:24.180600 2317 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:45:24.181133 kubelet[2317]: E0113 20:45:24.181094 2317 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost"
Jan 13 20:45:24.199316 kubelet[2317]: I0113 20:45:24.199242 2317 topology_manager.go:215] "Topology Admit Handler" podUID="32a3b155dce86689d1c7c26994c46eb3" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 20:45:24.200443 kubelet[2317]: I0113 20:45:24.200413 2317 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 20:45:24.201271 kubelet[2317]: I0113 20:45:24.201231 2317 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 20:45:24.206690 systemd[1]: Created slice kubepods-burstable-pod32a3b155dce86689d1c7c26994c46eb3.slice - libcontainer container kubepods-burstable-pod32a3b155dce86689d1c7c26994c46eb3.slice.
Jan 13 20:45:24.244428 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice.
Jan 13 20:45:24.265434 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice.
Jan 13 20:45:24.280361 kubelet[2317]: I0113 20:45:24.280322 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32a3b155dce86689d1c7c26994c46eb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32a3b155dce86689d1c7c26994c46eb3\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:45:24.280361 kubelet[2317]: I0113 20:45:24.280359 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:45:24.280507 kubelet[2317]: I0113 20:45:24.280379 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:45:24.280507 kubelet[2317]: I0113 20:45:24.280409 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:45:24.280507 kubelet[2317]: I0113 20:45:24.280478 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 20:45:24.280581 kubelet[2317]: I0113 20:45:24.280514 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32a3b155dce86689d1c7c26994c46eb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32a3b155dce86689d1c7c26994c46eb3\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:45:24.280581 kubelet[2317]: I0113 20:45:24.280540 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32a3b155dce86689d1c7c26994c46eb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32a3b155dce86689d1c7c26994c46eb3\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:45:24.280625 kubelet[2317]: I0113 20:45:24.280587 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:45:24.280662 kubelet[2317]: I0113 20:45:24.280643 2317 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:45:24.281489 kubelet[2317]: E0113 20:45:24.281470 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms"
Jan 13 20:45:24.382918 kubelet[2317]: I0113 20:45:24.382827 2317 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:45:24.383256 kubelet[2317]: E0113 20:45:24.383214 2317 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost"
Jan 13 20:45:24.542013 kubelet[2317]: E0113 20:45:24.541971 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:24.542665 containerd[1494]: time="2025-01-13T20:45:24.542625724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32a3b155dce86689d1c7c26994c46eb3,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:24.563249 kubelet[2317]: E0113 20:45:24.563188 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:24.563654 containerd[1494]: time="2025-01-13T20:45:24.563604809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:24.568171 kubelet[2317]: E0113 20:45:24.568131 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:24.568706 containerd[1494]: time="2025-01-13T20:45:24.568663114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:24.682187 kubelet[2317]: E0113 20:45:24.682059 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms"
Jan 13 20:45:24.784800 kubelet[2317]: I0113 20:45:24.784766 2317 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:45:24.785205 kubelet[2317]: E0113 20:45:24.785186 2317 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost"
Jan 13 20:45:24.926512 kubelet[2317]: W0113 20:45:24.926373 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:24.926512 kubelet[2317]: E0113 20:45:24.926502 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:25.059736 kubelet[2317]: W0113 20:45:25.059592 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:25.059736 kubelet[2317]: E0113 20:45:25.059657 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused
Jan 13 20:45:25.152231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497151523.mount: Deactivated successfully.
Jan 13 20:45:25.163976 containerd[1494]: time="2025-01-13T20:45:25.163906064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:25.168509 containerd[1494]: time="2025-01-13T20:45:25.168429415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 20:45:25.179552 containerd[1494]: time="2025-01-13T20:45:25.179477641Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:25.181094 containerd[1494]: time="2025-01-13T20:45:25.181040080Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:45:25.186398
containerd[1494]: time="2025-01-13T20:45:25.186301946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:45:25.191433 containerd[1494]: time="2025-01-13T20:45:25.191367304Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:25.192358 containerd[1494]: time="2025-01-13T20:45:25.192285546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:45:25.197256 containerd[1494]: time="2025-01-13T20:45:25.197197996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:25.199495 containerd[1494]: time="2025-01-13T20:45:25.199434520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.652493ms" Jan 13 20:45:25.200118 containerd[1494]: time="2025-01-13T20:45:25.200079449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.356804ms" Jan 13 20:45:25.211824 containerd[1494]: time="2025-01-13T20:45:25.211761633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.067186ms" Jan 13 20:45:25.357855 containerd[1494]: time="2025-01-13T20:45:25.357197946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:25.357855 containerd[1494]: time="2025-01-13T20:45:25.357270763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:25.357855 containerd[1494]: time="2025-01-13T20:45:25.357283767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.358139 containerd[1494]: time="2025-01-13T20:45:25.358029826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.359604 containerd[1494]: time="2025-01-13T20:45:25.359425493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:25.359704 containerd[1494]: time="2025-01-13T20:45:25.359679900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:25.359782 containerd[1494]: time="2025-01-13T20:45:25.359761793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.359983 containerd[1494]: time="2025-01-13T20:45:25.359904972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.360386 containerd[1494]: time="2025-01-13T20:45:25.360129713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:25.360386 containerd[1494]: time="2025-01-13T20:45:25.360189295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:25.360386 containerd[1494]: time="2025-01-13T20:45:25.360241693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.360386 containerd[1494]: time="2025-01-13T20:45:25.360307236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.381591 systemd[1]: Started cri-containerd-3ee8b7f36f9d69805235c2be9eebed2dead756d0615d0d29d90279143240ad32.scope - libcontainer container 3ee8b7f36f9d69805235c2be9eebed2dead756d0615d0d29d90279143240ad32. Jan 13 20:45:25.385568 systemd[1]: Started cri-containerd-4248b6735103547473f4cab985fd5dffc4dee90b5aa1ea2f71a634e382301664.scope - libcontainer container 4248b6735103547473f4cab985fd5dffc4dee90b5aa1ea2f71a634e382301664. Jan 13 20:45:25.387287 systemd[1]: Started cri-containerd-4693ac3204b904b2bd364c66f91c130d4284dab13acbf6e289064d4b4eba3911.scope - libcontainer container 4693ac3204b904b2bd364c66f91c130d4284dab13acbf6e289064d4b4eba3911. 
Jan 13 20:45:25.427113 containerd[1494]: time="2025-01-13T20:45:25.427046492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"4693ac3204b904b2bd364c66f91c130d4284dab13acbf6e289064d4b4eba3911\"" Jan 13 20:45:25.427402 containerd[1494]: time="2025-01-13T20:45:25.427358567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4248b6735103547473f4cab985fd5dffc4dee90b5aa1ea2f71a634e382301664\"" Jan 13 20:45:25.428752 kubelet[2317]: E0113 20:45:25.428716 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:25.428752 kubelet[2317]: E0113 20:45:25.428716 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:25.429306 containerd[1494]: time="2025-01-13T20:45:25.429270142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32a3b155dce86689d1c7c26994c46eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ee8b7f36f9d69805235c2be9eebed2dead756d0615d0d29d90279143240ad32\"" Jan 13 20:45:25.432644 kubelet[2317]: E0113 20:45:25.432607 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:25.435163 containerd[1494]: time="2025-01-13T20:45:25.435116283Z" level=info msg="CreateContainer within sandbox \"4248b6735103547473f4cab985fd5dffc4dee90b5aa1ea2f71a634e382301664\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:45:25.435271 containerd[1494]: 
time="2025-01-13T20:45:25.435118397Z" level=info msg="CreateContainer within sandbox \"4693ac3204b904b2bd364c66f91c130d4284dab13acbf6e289064d4b4eba3911\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:45:25.437516 containerd[1494]: time="2025-01-13T20:45:25.437411677Z" level=info msg="CreateContainer within sandbox \"3ee8b7f36f9d69805235c2be9eebed2dead756d0615d0d29d90279143240ad32\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:45:25.483539 kubelet[2317]: E0113 20:45:25.483479 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s" Jan 13 20:45:25.587198 kubelet[2317]: I0113 20:45:25.587151 2317 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:25.587612 kubelet[2317]: E0113 20:45:25.587578 2317 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 13 20:45:25.596225 kubelet[2317]: W0113 20:45:25.596155 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 20:45:25.596225 kubelet[2317]: E0113 20:45:25.596223 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 20:45:25.661125 kubelet[2317]: W0113 20:45:25.660939 2317 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list 
*v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 20:45:25.661125 kubelet[2317]: E0113 20:45:25.661018 2317 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 20:45:25.986954 containerd[1494]: time="2025-01-13T20:45:25.986777017Z" level=info msg="CreateContainer within sandbox \"4693ac3204b904b2bd364c66f91c130d4284dab13acbf6e289064d4b4eba3911\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"febeb3de896ca24821c3612e08df28911436684406229fbec7b63bf24efe409c\"" Jan 13 20:45:25.987564 containerd[1494]: time="2025-01-13T20:45:25.987523356Z" level=info msg="StartContainer for \"febeb3de896ca24821c3612e08df28911436684406229fbec7b63bf24efe409c\"" Jan 13 20:45:26.016611 systemd[1]: Started cri-containerd-febeb3de896ca24821c3612e08df28911436684406229fbec7b63bf24efe409c.scope - libcontainer container febeb3de896ca24821c3612e08df28911436684406229fbec7b63bf24efe409c. 
Jan 13 20:45:26.111784 containerd[1494]: time="2025-01-13T20:45:26.111730034Z" level=info msg="CreateContainer within sandbox \"3ee8b7f36f9d69805235c2be9eebed2dead756d0615d0d29d90279143240ad32\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bdf5f3358d05c32676e1e297246b005dc8b3cb96cf1f8caa76d2e469bdc5313b\"" Jan 13 20:45:26.111945 containerd[1494]: time="2025-01-13T20:45:26.111751885Z" level=info msg="CreateContainer within sandbox \"4248b6735103547473f4cab985fd5dffc4dee90b5aa1ea2f71a634e382301664\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34c20bb8e464628d5d72e62c250132bfed81b6e13e2802133cc0558b67581c7f\"" Jan 13 20:45:26.111945 containerd[1494]: time="2025-01-13T20:45:26.111757355Z" level=info msg="StartContainer for \"febeb3de896ca24821c3612e08df28911436684406229fbec7b63bf24efe409c\" returns successfully" Jan 13 20:45:26.112578 containerd[1494]: time="2025-01-13T20:45:26.112246111Z" level=info msg="StartContainer for \"bdf5f3358d05c32676e1e297246b005dc8b3cb96cf1f8caa76d2e469bdc5313b\"" Jan 13 20:45:26.114300 containerd[1494]: time="2025-01-13T20:45:26.112882685Z" level=info msg="StartContainer for \"34c20bb8e464628d5d72e62c250132bfed81b6e13e2802133cc0558b67581c7f\"" Jan 13 20:45:26.120372 kubelet[2317]: E0113 20:45:26.120313 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:26.152611 systemd[1]: Started cri-containerd-34c20bb8e464628d5d72e62c250132bfed81b6e13e2802133cc0558b67581c7f.scope - libcontainer container 34c20bb8e464628d5d72e62c250132bfed81b6e13e2802133cc0558b67581c7f. Jan 13 20:45:26.154224 systemd[1]: Started cri-containerd-bdf5f3358d05c32676e1e297246b005dc8b3cb96cf1f8caa76d2e469bdc5313b.scope - libcontainer container bdf5f3358d05c32676e1e297246b005dc8b3cb96cf1f8caa76d2e469bdc5313b. 
Jan 13 20:45:26.246780 kubelet[2317]: E0113 20:45:26.246610 2317 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.138:6443: connect: connection refused Jan 13 20:45:26.666977 containerd[1494]: time="2025-01-13T20:45:26.666906850Z" level=info msg="StartContainer for \"bdf5f3358d05c32676e1e297246b005dc8b3cb96cf1f8caa76d2e469bdc5313b\" returns successfully" Jan 13 20:45:26.667241 containerd[1494]: time="2025-01-13T20:45:26.666939461Z" level=info msg="StartContainer for \"34c20bb8e464628d5d72e62c250132bfed81b6e13e2802133cc0558b67581c7f\" returns successfully" Jan 13 20:45:27.124845 kubelet[2317]: E0113 20:45:27.124814 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:27.125847 kubelet[2317]: E0113 20:45:27.125823 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:27.189187 kubelet[2317]: I0113 20:45:27.188815 2317 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:27.310624 kubelet[2317]: E0113 20:45:27.310572 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:45:27.492327 kubelet[2317]: I0113 20:45:27.492176 2317 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:45:28.066285 kubelet[2317]: I0113 20:45:28.066228 2317 apiserver.go:52] "Watching apiserver" Jan 13 20:45:28.080439 kubelet[2317]: I0113 20:45:28.080374 2317 desired_state_of_world_populator.go:159] "Finished populating initial 
desired state of world" Jan 13 20:45:28.206659 kubelet[2317]: E0113 20:45:28.206614 2317 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:28.207195 kubelet[2317]: E0113 20:45:28.207080 2317 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:28.207195 kubelet[2317]: E0113 20:45:28.207123 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:28.207734 kubelet[2317]: E0113 20:45:28.207709 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:29.257411 kubelet[2317]: E0113 20:45:29.257365 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:30.130591 kubelet[2317]: E0113 20:45:30.130539 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:30.784114 kubelet[2317]: E0113 20:45:30.784071 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:31.131579 kubelet[2317]: E0113 20:45:31.131540 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 13 20:45:32.615195 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-7.scope)... Jan 13 20:45:32.615223 systemd[1]: Reloading... Jan 13 20:45:32.701492 zram_generator::config[2639]: No configuration found. Jan 13 20:45:32.817597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:45:32.911569 systemd[1]: Reloading finished in 295 ms. Jan 13 20:45:32.958646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:32.971418 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:45:32.971737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:32.983851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:33.130145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:33.136436 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:45:33.182995 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:45:33.182995 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:45:33.182995 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:45:33.182995 kubelet[2681]: I0113 20:45:33.182629 2681 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:45:33.187747 kubelet[2681]: I0113 20:45:33.187715 2681 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:45:33.187747 kubelet[2681]: I0113 20:45:33.187742 2681 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:45:33.187977 kubelet[2681]: I0113 20:45:33.187952 2681 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:45:33.189553 kubelet[2681]: I0113 20:45:33.189507 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:45:33.193970 kubelet[2681]: I0113 20:45:33.193784 2681 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:45:33.202021 kubelet[2681]: I0113 20:45:33.201940 2681 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:45:33.202195 kubelet[2681]: I0113 20:45:33.202163 2681 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:45:33.202377 kubelet[2681]: I0113 20:45:33.202358 2681 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:45:33.202480 kubelet[2681]: I0113 20:45:33.202386 2681 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:45:33.202480 kubelet[2681]: I0113 20:45:33.202395 2681 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:45:33.202480 kubelet[2681]: I0113 
20:45:33.202428 2681 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:45:33.202587 kubelet[2681]: I0113 20:45:33.202534 2681 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:45:33.202587 kubelet[2681]: I0113 20:45:33.202547 2681 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:45:33.202587 kubelet[2681]: I0113 20:45:33.202572 2681 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:45:33.202587 kubelet[2681]: I0113 20:45:33.202586 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:45:33.205580 kubelet[2681]: I0113 20:45:33.203848 2681 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:45:33.205580 kubelet[2681]: I0113 20:45:33.204006 2681 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:45:33.205580 kubelet[2681]: I0113 20:45:33.204341 2681 server.go:1256] "Started kubelet" Jan 13 20:45:33.206568 kubelet[2681]: I0113 20:45:33.206540 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:45:33.208157 kubelet[2681]: E0113 20:45:33.208139 2681 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:45:33.214578 kubelet[2681]: I0113 20:45:33.214541 2681 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:45:33.215549 kubelet[2681]: I0113 20:45:33.215524 2681 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:45:33.216739 kubelet[2681]: I0113 20:45:33.216711 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:45:33.217522 kubelet[2681]: I0113 20:45:33.217442 2681 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:45:33.217709 kubelet[2681]: I0113 20:45:33.217690 2681 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:45:33.217987 kubelet[2681]: I0113 20:45:33.217970 2681 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:45:33.218109 kubelet[2681]: I0113 20:45:33.218083 2681 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:45:33.219415 kubelet[2681]: I0113 20:45:33.219353 2681 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:45:33.221530 kubelet[2681]: I0113 20:45:33.221496 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:45:33.223163 kubelet[2681]: I0113 20:45:33.223137 2681 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:45:33.226477 kubelet[2681]: I0113 20:45:33.226415 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:45:33.228745 kubelet[2681]: I0113 20:45:33.228719 2681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:45:33.228745 kubelet[2681]: I0113 20:45:33.228750 2681 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:45:33.228850 kubelet[2681]: I0113 20:45:33.228780 2681 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:45:33.228850 kubelet[2681]: E0113 20:45:33.228834 2681 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:45:33.267792 kubelet[2681]: I0113 20:45:33.267757 2681 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:45:33.267792 kubelet[2681]: I0113 20:45:33.267783 2681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:45:33.267792 kubelet[2681]: I0113 20:45:33.267802 2681 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:45:33.268003 kubelet[2681]: I0113 20:45:33.267967 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:45:33.268003 kubelet[2681]: I0113 20:45:33.267993 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:45:33.268003 kubelet[2681]: I0113 20:45:33.268002 2681 policy_none.go:49] "None policy: Start" Jan 13 20:45:33.268709 kubelet[2681]: I0113 20:45:33.268573 2681 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:45:33.268709 kubelet[2681]: I0113 20:45:33.268626 2681 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:45:33.268842 kubelet[2681]: I0113 20:45:33.268822 2681 state_mem.go:75] "Updated machine memory state" Jan 13 20:45:33.273420 kubelet[2681]: I0113 20:45:33.273394 2681 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:45:33.274157 kubelet[2681]: I0113 20:45:33.273837 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:45:33.322922 kubelet[2681]: I0113 20:45:33.322867 2681 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Jan 13 20:45:33.328833 kubelet[2681]: I0113 20:45:33.328784 2681 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:45:33.329657 kubelet[2681]: I0113 20:45:33.328887 2681 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:45:33.329657 kubelet[2681]: I0113 20:45:33.329029 2681 topology_manager.go:215] "Topology Admit Handler" podUID="32a3b155dce86689d1c7c26994c46eb3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:45:33.329657 kubelet[2681]: I0113 20:45:33.329108 2681 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:45:33.329657 kubelet[2681]: I0113 20:45:33.329161 2681 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:45:33.338482 kubelet[2681]: E0113 20:45:33.337220 2681 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:33.338482 kubelet[2681]: E0113 20:45:33.337314 2681 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 20:45:33.519035 kubelet[2681]: I0113 20:45:33.518846 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:33.519035 kubelet[2681]: I0113 20:45:33.518924 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:45:33.519035 kubelet[2681]: I0113 20:45:33.518963 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32a3b155dce86689d1c7c26994c46eb3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32a3b155dce86689d1c7c26994c46eb3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:33.519035 kubelet[2681]: I0113 20:45:33.518989 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:33.519035 kubelet[2681]: I0113 20:45:33.519028 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:33.519315 kubelet[2681]: I0113 20:45:33.519063 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:33.519315 kubelet[2681]: I0113 20:45:33.519095 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:33.519315 kubelet[2681]: I0113 20:45:33.519121 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32a3b155dce86689d1c7c26994c46eb3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32a3b155dce86689d1c7c26994c46eb3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:33.519315 kubelet[2681]: I0113 20:45:33.519145 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32a3b155dce86689d1c7c26994c46eb3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32a3b155dce86689d1c7c26994c46eb3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:33.639081 kubelet[2681]: E0113 20:45:33.639030 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:33.640268 kubelet[2681]: E0113 20:45:33.639698 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:33.640268 kubelet[2681]: E0113 20:45:33.640137 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:34.203627 kubelet[2681]: I0113 20:45:34.203584 2681 apiserver.go:52] "Watching apiserver" Jan 13 20:45:34.219559 kubelet[2681]: I0113 20:45:34.217957 2681 desired_state_of_world_populator.go:159] "Finished populating initial desired state of 
world" Jan 13 20:45:34.246077 kubelet[2681]: E0113 20:45:34.245161 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:34.246077 kubelet[2681]: E0113 20:45:34.245545 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:34.246077 kubelet[2681]: E0113 20:45:34.246020 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:34.271828 kubelet[2681]: I0113 20:45:34.271774 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.271714325 podStartE2EDuration="1.271714325s" podCreationTimestamp="2025-01-13 20:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:34.264174828 +0000 UTC m=+1.123261498" watchObservedRunningTime="2025-01-13 20:45:34.271714325 +0000 UTC m=+1.130800985" Jan 13 20:45:34.287321 kubelet[2681]: I0113 20:45:34.287271 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.28720711 podStartE2EDuration="4.28720711s" podCreationTimestamp="2025-01-13 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:34.271933033 +0000 UTC m=+1.131019693" watchObservedRunningTime="2025-01-13 20:45:34.28720711 +0000 UTC m=+1.146293770" Jan 13 20:45:34.330628 kubelet[2681]: I0113 20:45:34.330399 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.330346672 podStartE2EDuration="5.330346672s" podCreationTimestamp="2025-01-13 20:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:34.287978014 +0000 UTC m=+1.147064674" watchObservedRunningTime="2025-01-13 20:45:34.330346672 +0000 UTC m=+1.189433342" Jan 13 20:45:35.247386 kubelet[2681]: E0113 20:45:35.247342 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:35.247386 kubelet[2681]: E0113 20:45:35.247342 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:37.224479 sudo[1680]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:37.225994 sshd[1679]: Connection closed by 10.0.0.1 port 53442 Jan 13 20:45:37.226426 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:37.231129 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:53442.service: Deactivated successfully. Jan 13 20:45:37.233623 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:45:37.233870 systemd[1]: session-7.scope: Consumed 4.253s CPU time, 188.1M memory peak, 0B memory swap peak. Jan 13 20:45:37.234676 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:45:37.235919 systemd-logind[1485]: Removed session 7. 
Jan 13 20:45:37.575400 kubelet[2681]: E0113 20:45:37.575242 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:38.250719 kubelet[2681]: E0113 20:45:38.250682 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:42.434830 kubelet[2681]: E0113 20:45:42.434759 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:43.258402 kubelet[2681]: E0113 20:45:43.258363 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:44.260724 kubelet[2681]: E0113 20:45:44.260662 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:44.331587 kubelet[2681]: E0113 20:45:44.331552 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:44.969501 update_engine[1486]: I20250113 20:45:44.969333 1486 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:45:45.004433 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2781) Jan 13 20:45:45.027524 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2781) Jan 13 20:45:45.071511 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2781) Jan 13 20:45:46.155666 kubelet[2681]: I0113 20:45:46.155618 2681 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:45:46.156409 containerd[1494]: time="2025-01-13T20:45:46.156359581Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:45:46.156828 kubelet[2681]: I0113 20:45:46.156607 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:45:46.677484 kubelet[2681]: I0113 20:45:46.677155 2681 topology_manager.go:215] "Topology Admit Handler" podUID="1330190d-55b5-41c7-8f55-8217ff136868" podNamespace="kube-system" podName="kube-proxy-pqrjg" Jan 13 20:45:46.677951 kubelet[2681]: I0113 20:45:46.677916 2681 topology_manager.go:215] "Topology Admit Handler" podUID="fcf33d75-5855-4eaf-9e14-96e1044db97e" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-hqbd6" Jan 13 20:45:46.690140 kubelet[2681]: I0113 20:45:46.690090 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1330190d-55b5-41c7-8f55-8217ff136868-xtables-lock\") pod \"kube-proxy-pqrjg\" (UID: \"1330190d-55b5-41c7-8f55-8217ff136868\") " pod="kube-system/kube-proxy-pqrjg" Jan 13 20:45:46.690140 kubelet[2681]: I0113 20:45:46.690141 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1330190d-55b5-41c7-8f55-8217ff136868-kube-proxy\") pod 
\"kube-proxy-pqrjg\" (UID: \"1330190d-55b5-41c7-8f55-8217ff136868\") " pod="kube-system/kube-proxy-pqrjg" Jan 13 20:45:46.690311 kubelet[2681]: I0113 20:45:46.690164 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fcf33d75-5855-4eaf-9e14-96e1044db97e-var-lib-calico\") pod \"tigera-operator-c7ccbd65-hqbd6\" (UID: \"fcf33d75-5855-4eaf-9e14-96e1044db97e\") " pod="tigera-operator/tigera-operator-c7ccbd65-hqbd6" Jan 13 20:45:46.690311 kubelet[2681]: I0113 20:45:46.690187 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1330190d-55b5-41c7-8f55-8217ff136868-lib-modules\") pod \"kube-proxy-pqrjg\" (UID: \"1330190d-55b5-41c7-8f55-8217ff136868\") " pod="kube-system/kube-proxy-pqrjg" Jan 13 20:45:46.690311 kubelet[2681]: I0113 20:45:46.690207 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92zb\" (UniqueName: \"kubernetes.io/projected/1330190d-55b5-41c7-8f55-8217ff136868-kube-api-access-g92zb\") pod \"kube-proxy-pqrjg\" (UID: \"1330190d-55b5-41c7-8f55-8217ff136868\") " pod="kube-system/kube-proxy-pqrjg" Jan 13 20:45:46.690311 kubelet[2681]: I0113 20:45:46.690259 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtsb\" (UniqueName: \"kubernetes.io/projected/fcf33d75-5855-4eaf-9e14-96e1044db97e-kube-api-access-4qtsb\") pod \"tigera-operator-c7ccbd65-hqbd6\" (UID: \"fcf33d75-5855-4eaf-9e14-96e1044db97e\") " pod="tigera-operator/tigera-operator-c7ccbd65-hqbd6" Jan 13 20:45:46.690193 systemd[1]: Created slice kubepods-besteffort-pod1330190d_55b5_41c7_8f55_8217ff136868.slice - libcontainer container kubepods-besteffort-pod1330190d_55b5_41c7_8f55_8217ff136868.slice. 
Jan 13 20:45:46.693189 systemd[1]: Created slice kubepods-besteffort-podfcf33d75_5855_4eaf_9e14_96e1044db97e.slice - libcontainer container kubepods-besteffort-podfcf33d75_5855_4eaf_9e14_96e1044db97e.slice. Jan 13 20:45:47.000765 kubelet[2681]: E0113 20:45:47.000621 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:47.001295 containerd[1494]: time="2025-01-13T20:45:47.001167362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqrjg,Uid:1330190d-55b5-41c7-8f55-8217ff136868,Namespace:kube-system,Attempt:0,}" Jan 13 20:45:47.001569 containerd[1494]: time="2025-01-13T20:45:47.001398369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-hqbd6,Uid:fcf33d75-5855-4eaf-9e14-96e1044db97e,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.239097878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.239144475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.239157971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.238994281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.239054896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.239069894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:47.239275 containerd[1494]: time="2025-01-13T20:45:47.239146689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:47.240291 containerd[1494]: time="2025-01-13T20:45:47.240166909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:47.267605 systemd[1]: Started cri-containerd-1dab2ea3cadc7dcbdbddddf0993b38b6fe428132d7703e70a6901d57df680187.scope - libcontainer container 1dab2ea3cadc7dcbdbddddf0993b38b6fe428132d7703e70a6901d57df680187. Jan 13 20:45:47.270791 systemd[1]: Started cri-containerd-8b85a5db9873556b178ab6133518198f7fe9a9a77526bc1b279492a7c3fa4300.scope - libcontainer container 8b85a5db9873556b178ab6133518198f7fe9a9a77526bc1b279492a7c3fa4300. 
Jan 13 20:45:47.297516 containerd[1494]: time="2025-01-13T20:45:47.297466377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqrjg,Uid:1330190d-55b5-41c7-8f55-8217ff136868,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b85a5db9873556b178ab6133518198f7fe9a9a77526bc1b279492a7c3fa4300\"" Jan 13 20:45:47.298438 kubelet[2681]: E0113 20:45:47.298409 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:47.300882 containerd[1494]: time="2025-01-13T20:45:47.300838333Z" level=info msg="CreateContainer within sandbox \"8b85a5db9873556b178ab6133518198f7fe9a9a77526bc1b279492a7c3fa4300\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:45:47.313562 containerd[1494]: time="2025-01-13T20:45:47.313526828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-hqbd6,Uid:fcf33d75-5855-4eaf-9e14-96e1044db97e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1dab2ea3cadc7dcbdbddddf0993b38b6fe428132d7703e70a6901d57df680187\"" Jan 13 20:45:47.314966 containerd[1494]: time="2025-01-13T20:45:47.314921145Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:45:47.537316 containerd[1494]: time="2025-01-13T20:45:47.537162952Z" level=info msg="CreateContainer within sandbox \"8b85a5db9873556b178ab6133518198f7fe9a9a77526bc1b279492a7c3fa4300\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e4d549c613bbd582c81ebaee14d01f262c16f8c16d89249ecd5d695e2fe877e\"" Jan 13 20:45:47.538060 containerd[1494]: time="2025-01-13T20:45:47.538028398Z" level=info msg="StartContainer for \"5e4d549c613bbd582c81ebaee14d01f262c16f8c16d89249ecd5d695e2fe877e\"" Jan 13 20:45:47.567708 systemd[1]: Started cri-containerd-5e4d549c613bbd582c81ebaee14d01f262c16f8c16d89249ecd5d695e2fe877e.scope - libcontainer container 
5e4d549c613bbd582c81ebaee14d01f262c16f8c16d89249ecd5d695e2fe877e. Jan 13 20:45:47.602532 containerd[1494]: time="2025-01-13T20:45:47.602495921Z" level=info msg="StartContainer for \"5e4d549c613bbd582c81ebaee14d01f262c16f8c16d89249ecd5d695e2fe877e\" returns successfully" Jan 13 20:45:48.270118 kubelet[2681]: E0113 20:45:48.270071 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:49.304969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60176397.mount: Deactivated successfully. Jan 13 20:45:49.589574 containerd[1494]: time="2025-01-13T20:45:49.589428132Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:49.590369 containerd[1494]: time="2025-01-13T20:45:49.590337599Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764321" Jan 13 20:45:49.591647 containerd[1494]: time="2025-01-13T20:45:49.591619712Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:49.593845 containerd[1494]: time="2025-01-13T20:45:49.593801724Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:49.594492 containerd[1494]: time="2025-01-13T20:45:49.594428398Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.279461446s" Jan 
13 20:45:49.594492 containerd[1494]: time="2025-01-13T20:45:49.594485436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 20:45:49.595971 containerd[1494]: time="2025-01-13T20:45:49.595936447Z" level=info msg="CreateContainer within sandbox \"1dab2ea3cadc7dcbdbddddf0993b38b6fe428132d7703e70a6901d57df680187\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:45:49.608905 containerd[1494]: time="2025-01-13T20:45:49.608858749Z" level=info msg="CreateContainer within sandbox \"1dab2ea3cadc7dcbdbddddf0993b38b6fe428132d7703e70a6901d57df680187\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5221a24d2d4c915f7f87699186bdb37b793b837aa4a67c9c6855ea6225630713\"" Jan 13 20:45:49.609437 containerd[1494]: time="2025-01-13T20:45:49.609391275Z" level=info msg="StartContainer for \"5221a24d2d4c915f7f87699186bdb37b793b837aa4a67c9c6855ea6225630713\"" Jan 13 20:45:49.637593 systemd[1]: Started cri-containerd-5221a24d2d4c915f7f87699186bdb37b793b837aa4a67c9c6855ea6225630713.scope - libcontainer container 5221a24d2d4c915f7f87699186bdb37b793b837aa4a67c9c6855ea6225630713. 
Jan 13 20:45:49.665007 containerd[1494]: time="2025-01-13T20:45:49.664945735Z" level=info msg="StartContainer for \"5221a24d2d4c915f7f87699186bdb37b793b837aa4a67c9c6855ea6225630713\" returns successfully" Jan 13 20:45:50.284001 kubelet[2681]: I0113 20:45:50.283631 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pqrjg" podStartSLOduration=4.283597881 podStartE2EDuration="4.283597881s" podCreationTimestamp="2025-01-13 20:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:48.279689967 +0000 UTC m=+15.138776627" watchObservedRunningTime="2025-01-13 20:45:50.283597881 +0000 UTC m=+17.142684531" Jan 13 20:45:50.284001 kubelet[2681]: I0113 20:45:50.283738 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-hqbd6" podStartSLOduration=2.003461939 podStartE2EDuration="4.283717998s" podCreationTimestamp="2025-01-13 20:45:46 +0000 UTC" firstStartedPulling="2025-01-13 20:45:47.314519695 +0000 UTC m=+14.173606345" lastFinishedPulling="2025-01-13 20:45:49.594775744 +0000 UTC m=+16.453862404" observedRunningTime="2025-01-13 20:45:50.283409916 +0000 UTC m=+17.142496586" watchObservedRunningTime="2025-01-13 20:45:50.283717998 +0000 UTC m=+17.142804658" Jan 13 20:45:52.675930 kubelet[2681]: I0113 20:45:52.675010 2681 topology_manager.go:215] "Topology Admit Handler" podUID="345d9dc1-5ab2-404b-9fb8-8e877d356655" podNamespace="calico-system" podName="calico-typha-dfdbc7dcc-hcv48" Jan 13 20:45:52.691350 systemd[1]: Created slice kubepods-besteffort-pod345d9dc1_5ab2_404b_9fb8_8e877d356655.slice - libcontainer container kubepods-besteffort-pod345d9dc1_5ab2_404b_9fb8_8e877d356655.slice. 
Jan 13 20:45:52.723903 kubelet[2681]: I0113 20:45:52.723847 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/345d9dc1-5ab2-404b-9fb8-8e877d356655-tigera-ca-bundle\") pod \"calico-typha-dfdbc7dcc-hcv48\" (UID: \"345d9dc1-5ab2-404b-9fb8-8e877d356655\") " pod="calico-system/calico-typha-dfdbc7dcc-hcv48" Jan 13 20:45:52.723903 kubelet[2681]: I0113 20:45:52.723890 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/345d9dc1-5ab2-404b-9fb8-8e877d356655-typha-certs\") pod \"calico-typha-dfdbc7dcc-hcv48\" (UID: \"345d9dc1-5ab2-404b-9fb8-8e877d356655\") " pod="calico-system/calico-typha-dfdbc7dcc-hcv48" Jan 13 20:45:52.723903 kubelet[2681]: I0113 20:45:52.723910 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqb59\" (UniqueName: \"kubernetes.io/projected/345d9dc1-5ab2-404b-9fb8-8e877d356655-kube-api-access-dqb59\") pod \"calico-typha-dfdbc7dcc-hcv48\" (UID: \"345d9dc1-5ab2-404b-9fb8-8e877d356655\") " pod="calico-system/calico-typha-dfdbc7dcc-hcv48" Jan 13 20:45:52.761958 kubelet[2681]: I0113 20:45:52.759646 2681 topology_manager.go:215] "Topology Admit Handler" podUID="5e04c420-4e2b-422c-9576-53c2b099a388" podNamespace="calico-system" podName="calico-node-8cktq" Jan 13 20:45:52.768418 systemd[1]: Created slice kubepods-besteffort-pod5e04c420_4e2b_422c_9576_53c2b099a388.slice - libcontainer container kubepods-besteffort-pod5e04c420_4e2b_422c_9576_53c2b099a388.slice. 
Jan 13 20:45:52.824215 kubelet[2681]: I0113 20:45:52.824156 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-policysync\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824215 kubelet[2681]: I0113 20:45:52.824211 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-var-run-calico\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824394 kubelet[2681]: I0113 20:45:52.824249 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-lib-modules\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824394 kubelet[2681]: I0113 20:45:52.824273 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e04c420-4e2b-422c-9576-53c2b099a388-node-certs\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824394 kubelet[2681]: I0113 20:45:52.824292 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-var-lib-calico\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824394 kubelet[2681]: I0113 20:45:52.824310 2681 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-cni-log-dir\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824394 kubelet[2681]: I0113 20:45:52.824332 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-xtables-lock\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824533 kubelet[2681]: I0113 20:45:52.824350 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-cni-net-dir\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824533 kubelet[2681]: I0113 20:45:52.824393 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-cni-bin-dir\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.824533 kubelet[2681]: I0113 20:45:52.824499 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e04c420-4e2b-422c-9576-53c2b099a388-flexvol-driver-host\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.825507 kubelet[2681]: I0113 20:45:52.825405 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e04c420-4e2b-422c-9576-53c2b099a388-tigera-ca-bundle\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.825507 kubelet[2681]: I0113 20:45:52.825434 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7m7\" (UniqueName: \"kubernetes.io/projected/5e04c420-4e2b-422c-9576-53c2b099a388-kube-api-access-5v7m7\") pod \"calico-node-8cktq\" (UID: \"5e04c420-4e2b-422c-9576-53c2b099a388\") " pod="calico-system/calico-node-8cktq" Jan 13 20:45:52.854188 kubelet[2681]: I0113 20:45:52.854127 2681 topology_manager.go:215] "Topology Admit Handler" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" podNamespace="calico-system" podName="csi-node-driver-n9xm5" Jan 13 20:45:52.854693 kubelet[2681]: E0113 20:45:52.854423 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:45:52.927342 kubelet[2681]: I0113 20:45:52.925690 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39e13210-d183-473d-999b-c81aa9bc8ccf-registration-dir\") pod \"csi-node-driver-n9xm5\" (UID: \"39e13210-d183-473d-999b-c81aa9bc8ccf\") " pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:45:52.927342 kubelet[2681]: I0113 20:45:52.925771 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39e13210-d183-473d-999b-c81aa9bc8ccf-kubelet-dir\") pod \"csi-node-driver-n9xm5\" (UID: \"39e13210-d183-473d-999b-c81aa9bc8ccf\") 
" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:45:52.927342 kubelet[2681]: I0113 20:45:52.925798 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39e13210-d183-473d-999b-c81aa9bc8ccf-socket-dir\") pod \"csi-node-driver-n9xm5\" (UID: \"39e13210-d183-473d-999b-c81aa9bc8ccf\") " pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:45:52.927342 kubelet[2681]: I0113 20:45:52.925847 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/39e13210-d183-473d-999b-c81aa9bc8ccf-varrun\") pod \"csi-node-driver-n9xm5\" (UID: \"39e13210-d183-473d-999b-c81aa9bc8ccf\") " pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:45:52.927342 kubelet[2681]: I0113 20:45:52.925867 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8x8g\" (UniqueName: \"kubernetes.io/projected/39e13210-d183-473d-999b-c81aa9bc8ccf-kube-api-access-n8x8g\") pod \"csi-node-driver-n9xm5\" (UID: \"39e13210-d183-473d-999b-c81aa9bc8ccf\") " pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:45:52.928655 kubelet[2681]: E0113 20:45:52.928418 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:52.928655 kubelet[2681]: W0113 20:45:52.928446 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:52.928655 kubelet[2681]: E0113 20:45:52.928518 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:52.934610 kubelet[2681]: E0113 20:45:52.934580 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:52.934610 kubelet[2681]: W0113 20:45:52.934604 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:52.936357 kubelet[2681]: E0113 20:45:52.934632 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:52.936357 kubelet[2681]: E0113 20:45:52.934911 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:52.936357 kubelet[2681]: W0113 20:45:52.934923 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:52.936357 kubelet[2681]: E0113 20:45:52.934944 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:52.996242 kubelet[2681]: E0113 20:45:52.996174 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:52.996941 containerd[1494]: time="2025-01-13T20:45:52.996851068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dfdbc7dcc-hcv48,Uid:345d9dc1-5ab2-404b-9fb8-8e877d356655,Namespace:calico-system,Attempt:0,}" Jan 13 20:45:53.027012 kubelet[2681]: E0113 20:45:53.026979 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.027012 kubelet[2681]: W0113 20:45:53.027004 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.027605 kubelet[2681]: E0113 20:45:53.027057 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.027605 kubelet[2681]: E0113 20:45:53.027429 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.027605 kubelet[2681]: W0113 20:45:53.027439 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.027605 kubelet[2681]: E0113 20:45:53.027499 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.028020 kubelet[2681]: E0113 20:45:53.027995 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.028171 kubelet[2681]: W0113 20:45:53.028105 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.028171 kubelet[2681]: E0113 20:45:53.028130 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.028481 kubelet[2681]: E0113 20:45:53.028440 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.028585 kubelet[2681]: W0113 20:45:53.028485 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.028585 kubelet[2681]: E0113 20:45:53.028509 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.028885 kubelet[2681]: E0113 20:45:53.028858 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.028885 kubelet[2681]: W0113 20:45:53.028873 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.029654 kubelet[2681]: E0113 20:45:53.028987 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.029654 kubelet[2681]: E0113 20:45:53.029573 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.029654 kubelet[2681]: W0113 20:45:53.029590 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.029748 kubelet[2681]: E0113 20:45:53.029692 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.029903 containerd[1494]: time="2025-01-13T20:45:53.029660527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:53.029903 containerd[1494]: time="2025-01-13T20:45:53.029734527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:53.029903 containerd[1494]: time="2025-01-13T20:45:53.029749155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:53.030640 containerd[1494]: time="2025-01-13T20:45:53.030518326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:53.030689 kubelet[2681]: E0113 20:45:53.030621 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.030689 kubelet[2681]: W0113 20:45:53.030645 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.030866 kubelet[2681]: E0113 20:45:53.030800 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.031206 kubelet[2681]: E0113 20:45:53.031153 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.031256 kubelet[2681]: W0113 20:45:53.031206 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.031595 kubelet[2681]: E0113 20:45:53.031520 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.031595 kubelet[2681]: E0113 20:45:53.031572 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.031595 kubelet[2681]: W0113 20:45:53.031581 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.031802 kubelet[2681]: E0113 20:45:53.031643 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.032173 kubelet[2681]: E0113 20:45:53.032144 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.032173 kubelet[2681]: W0113 20:45:53.032158 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.032251 kubelet[2681]: E0113 20:45:53.032178 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.033582 kubelet[2681]: E0113 20:45:53.033554 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.033582 kubelet[2681]: W0113 20:45:53.033567 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.033582 kubelet[2681]: E0113 20:45:53.033581 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.034283 kubelet[2681]: E0113 20:45:53.034259 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.034283 kubelet[2681]: W0113 20:45:53.034277 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.034472 kubelet[2681]: E0113 20:45:53.034374 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.034728 kubelet[2681]: E0113 20:45:53.034686 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.034728 kubelet[2681]: W0113 20:45:53.034700 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.034919 kubelet[2681]: E0113 20:45:53.034759 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.035000 kubelet[2681]: E0113 20:45:53.034978 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.035000 kubelet[2681]: W0113 20:45:53.034988 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.035085 kubelet[2681]: E0113 20:45:53.035042 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.035324 kubelet[2681]: E0113 20:45:53.035294 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.035324 kubelet[2681]: W0113 20:45:53.035309 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.035419 kubelet[2681]: E0113 20:45:53.035405 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.035781 kubelet[2681]: E0113 20:45:53.035746 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.035944 kubelet[2681]: W0113 20:45:53.035861 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.035944 kubelet[2681]: E0113 20:45:53.035883 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.036422 kubelet[2681]: E0113 20:45:53.036270 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.036422 kubelet[2681]: W0113 20:45:53.036283 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.036422 kubelet[2681]: E0113 20:45:53.036339 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.036668 kubelet[2681]: E0113 20:45:53.036611 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.036668 kubelet[2681]: W0113 20:45:53.036624 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.036668 kubelet[2681]: E0113 20:45:53.036647 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.037238 kubelet[2681]: E0113 20:45:53.037161 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.037500 kubelet[2681]: W0113 20:45:53.037372 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.037606 kubelet[2681]: E0113 20:45:53.037552 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.038053 kubelet[2681]: E0113 20:45:53.037913 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.038053 kubelet[2681]: W0113 20:45:53.037924 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.038053 kubelet[2681]: E0113 20:45:53.037967 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.038282 kubelet[2681]: E0113 20:45:53.038262 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.038282 kubelet[2681]: W0113 20:45:53.038279 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.038392 kubelet[2681]: E0113 20:45:53.038331 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.038652 kubelet[2681]: E0113 20:45:53.038635 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.038652 kubelet[2681]: W0113 20:45:53.038649 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.038729 kubelet[2681]: E0113 20:45:53.038670 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.038941 kubelet[2681]: E0113 20:45:53.038924 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.038941 kubelet[2681]: W0113 20:45:53.038938 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.039010 kubelet[2681]: E0113 20:45:53.038952 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.039542 kubelet[2681]: E0113 20:45:53.039525 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.039542 kubelet[2681]: W0113 20:45:53.039539 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.039618 kubelet[2681]: E0113 20:45:53.039551 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:53.039974 kubelet[2681]: E0113 20:45:53.039955 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.039974 kubelet[2681]: W0113 20:45:53.039972 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.040051 kubelet[2681]: E0113 20:45:53.039986 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.050344 kubelet[2681]: E0113 20:45:53.050317 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:53.050528 kubelet[2681]: W0113 20:45:53.050498 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:53.050528 kubelet[2681]: E0113 20:45:53.050529 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:53.056604 systemd[1]: Started cri-containerd-868e9d44ab3fe35e41011a0ef9be2388b909d399cfb867aca54acef93917c23f.scope - libcontainer container 868e9d44ab3fe35e41011a0ef9be2388b909d399cfb867aca54acef93917c23f. 
Jan 13 20:45:53.071641 kubelet[2681]: E0113 20:45:53.071603 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:53.072164 containerd[1494]: time="2025-01-13T20:45:53.072119972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8cktq,Uid:5e04c420-4e2b-422c-9576-53c2b099a388,Namespace:calico-system,Attempt:0,}" Jan 13 20:45:53.094879 containerd[1494]: time="2025-01-13T20:45:53.094814543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dfdbc7dcc-hcv48,Uid:345d9dc1-5ab2-404b-9fb8-8e877d356655,Namespace:calico-system,Attempt:0,} returns sandbox id \"868e9d44ab3fe35e41011a0ef9be2388b909d399cfb867aca54acef93917c23f\"" Jan 13 20:45:53.097524 kubelet[2681]: E0113 20:45:53.097481 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:53.098771 containerd[1494]: time="2025-01-13T20:45:53.098558263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 20:45:53.479550 containerd[1494]: time="2025-01-13T20:45:53.479366936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:53.479550 containerd[1494]: time="2025-01-13T20:45:53.479436858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:53.479550 containerd[1494]: time="2025-01-13T20:45:53.479449942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:53.479811 containerd[1494]: time="2025-01-13T20:45:53.479558347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:53.500656 systemd[1]: Started cri-containerd-6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50.scope - libcontainer container 6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50. Jan 13 20:45:53.526871 containerd[1494]: time="2025-01-13T20:45:53.526820730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8cktq,Uid:5e04c420-4e2b-422c-9576-53c2b099a388,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\"" Jan 13 20:45:53.527742 kubelet[2681]: E0113 20:45:53.527695 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:54.229818 kubelet[2681]: E0113 20:45:54.229779 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:45:55.825108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687407865.mount: Deactivated successfully. 
Jan 13 20:45:56.230198 kubelet[2681]: E0113 20:45:56.230122 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:45:57.456784 containerd[1494]: time="2025-01-13T20:45:57.456715861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:57.457862 containerd[1494]: time="2025-01-13T20:45:57.457817976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 13 20:45:57.459331 containerd[1494]: time="2025-01-13T20:45:57.459295910Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:57.461723 containerd[1494]: time="2025-01-13T20:45:57.461686493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:57.462331 containerd[1494]: time="2025-01-13T20:45:57.462282335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.363679568s" Jan 13 20:45:57.462331 containerd[1494]: time="2025-01-13T20:45:57.462322370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 20:45:57.465140 containerd[1494]: time="2025-01-13T20:45:57.465092628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:45:57.477665 containerd[1494]: time="2025-01-13T20:45:57.477627086Z" level=info msg="CreateContainer within sandbox \"868e9d44ab3fe35e41011a0ef9be2388b909d399cfb867aca54acef93917c23f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 20:45:57.492075 containerd[1494]: time="2025-01-13T20:45:57.492022819Z" level=info msg="CreateContainer within sandbox \"868e9d44ab3fe35e41011a0ef9be2388b909d399cfb867aca54acef93917c23f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a81160c76ab99e297447a35c0e347f9f216ea546435b75247bea3d15599e454\"" Jan 13 20:45:57.492604 containerd[1494]: time="2025-01-13T20:45:57.492560843Z" level=info msg="StartContainer for \"6a81160c76ab99e297447a35c0e347f9f216ea546435b75247bea3d15599e454\"" Jan 13 20:45:57.521612 systemd[1]: Started cri-containerd-6a81160c76ab99e297447a35c0e347f9f216ea546435b75247bea3d15599e454.scope - libcontainer container 6a81160c76ab99e297447a35c0e347f9f216ea546435b75247bea3d15599e454. 
Jan 13 20:45:57.566687 containerd[1494]: time="2025-01-13T20:45:57.566552270Z" level=info msg="StartContainer for \"6a81160c76ab99e297447a35c0e347f9f216ea546435b75247bea3d15599e454\" returns successfully" Jan 13 20:45:58.233785 kubelet[2681]: E0113 20:45:58.233726 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:45:58.292325 kubelet[2681]: E0113 20:45:58.292278 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:58.358065 kubelet[2681]: I0113 20:45:58.358019 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-dfdbc7dcc-hcv48" podStartSLOduration=1.991308658 podStartE2EDuration="6.357980594s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:45:53.098170752 +0000 UTC m=+19.957257422" lastFinishedPulling="2025-01-13 20:45:57.464842708 +0000 UTC m=+24.323929358" observedRunningTime="2025-01-13 20:45:58.357655431 +0000 UTC m=+25.216742091" watchObservedRunningTime="2025-01-13 20:45:58.357980594 +0000 UTC m=+25.217067254" Jan 13 20:45:58.365161 kubelet[2681]: E0113 20:45:58.365137 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.365161 kubelet[2681]: W0113 20:45:58.365154 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.365161 kubelet[2681]: E0113 20:45:58.365170 2681 plugins.go:730] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.365433 kubelet[2681]: E0113 20:45:58.365371 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.365433 kubelet[2681]: W0113 20:45:58.365379 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.365433 kubelet[2681]: E0113 20:45:58.365389 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.365577 kubelet[2681]: E0113 20:45:58.365557 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.365577 kubelet[2681]: W0113 20:45:58.365564 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.365577 kubelet[2681]: E0113 20:45:58.365573 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.365769 kubelet[2681]: E0113 20:45:58.365750 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.365769 kubelet[2681]: W0113 20:45:58.365760 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.365769 kubelet[2681]: E0113 20:45:58.365769 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.365952 kubelet[2681]: E0113 20:45:58.365932 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.365952 kubelet[2681]: W0113 20:45:58.365939 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.365952 kubelet[2681]: E0113 20:45:58.365948 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.366139 kubelet[2681]: E0113 20:45:58.366111 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.366139 kubelet[2681]: W0113 20:45:58.366131 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.366139 kubelet[2681]: E0113 20:45:58.366141 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.366327 kubelet[2681]: E0113 20:45:58.366304 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.366327 kubelet[2681]: W0113 20:45:58.366315 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.366327 kubelet[2681]: E0113 20:45:58.366324 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.366608 kubelet[2681]: E0113 20:45:58.366584 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.366642 kubelet[2681]: W0113 20:45:58.366607 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.366642 kubelet[2681]: E0113 20:45:58.366635 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.366973 kubelet[2681]: E0113 20:45:58.366953 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.366973 kubelet[2681]: W0113 20:45:58.366969 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.367127 kubelet[2681]: E0113 20:45:58.366986 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 20:45:59.292850 kubelet[2681]: I0113 20:45:59.292814 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:45:59.293370 kubelet[2681]: E0113 20:45:59.293345 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:59.376908 kubelet[2681]: E0113 20:45:59.376873 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:59.376908 kubelet[2681]: W0113 20:45:59.376895 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:59.376908 kubelet[2681]: E0113 20:45:59.376915 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:45:59.377147 kubelet[2681]: E0113 20:45:59.377131 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:59.377183 kubelet[2681]: W0113 20:45:59.377146 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:59.377183 kubelet[2681]: E0113 20:45:59.377161 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:59.384334 kubelet[2681]: E0113 20:45:59.384191 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:59.384334 kubelet[2681]: W0113 20:45:59.384206 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:59.384334 kubelet[2681]: E0113 20:45:59.384228 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:59.384477 kubelet[2681]: E0113 20:45:59.384434 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:59.384477 kubelet[2681]: W0113 20:45:59.384468 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:59.384552 kubelet[2681]: E0113 20:45:59.384483 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:59.385989 kubelet[2681]: E0113 20:45:59.385966 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:59.385989 kubelet[2681]: W0113 20:45:59.385981 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:59.386058 kubelet[2681]: E0113 20:45:59.385996 2681 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:46:00.229588 kubelet[2681]: E0113 20:46:00.229538 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:00.358593 containerd[1494]: time="2025-01-13T20:46:00.358526223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:00.359817 containerd[1494]: time="2025-01-13T20:46:00.359736231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:46:00.361004 containerd[1494]: time="2025-01-13T20:46:00.360955084Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:00.363573 containerd[1494]: time="2025-01-13T20:46:00.363534018Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:00.364219 containerd[1494]: time="2025-01-13T20:46:00.364179503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.899049864s" Jan 13 20:46:00.364219 containerd[1494]: time="2025-01-13T20:46:00.364216172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:46:00.365901 containerd[1494]: time="2025-01-13T20:46:00.365872248Z" level=info msg="CreateContainer within sandbox \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:46:00.381899 containerd[1494]: time="2025-01-13T20:46:00.381848225Z" level=info msg="CreateContainer within sandbox \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26\"" Jan 13 20:46:00.382464 containerd[1494]: time="2025-01-13T20:46:00.382422826Z" level=info msg="StartContainer for \"c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26\"" Jan 13 20:46:00.414636 systemd[1]: Started cri-containerd-c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26.scope - libcontainer container c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26. 
Jan 13 20:46:00.451589 containerd[1494]: time="2025-01-13T20:46:00.451541854Z" level=info msg="StartContainer for \"c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26\" returns successfully" Jan 13 20:46:00.467231 systemd[1]: cri-containerd-c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26.scope: Deactivated successfully. Jan 13 20:46:00.492172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26-rootfs.mount: Deactivated successfully. Jan 13 20:46:00.504153 containerd[1494]: time="2025-01-13T20:46:00.504092391Z" level=info msg="shim disconnected" id=c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26 namespace=k8s.io Jan 13 20:46:00.504153 containerd[1494]: time="2025-01-13T20:46:00.504148387Z" level=warning msg="cleaning up after shim disconnected" id=c860576df1e76a55cfdd48603687743e5935b55505f3cc8fa1ba4721dd447a26 namespace=k8s.io Jan 13 20:46:00.504153 containerd[1494]: time="2025-01-13T20:46:00.504158787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:01.297815 kubelet[2681]: E0113 20:46:01.297776 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:01.300923 containerd[1494]: time="2025-01-13T20:46:01.300856469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:46:02.229859 kubelet[2681]: E0113 20:46:02.229816 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:04.038636 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:54226.service - OpenSSH per-connection server daemon 
(10.0.0.1:54226). Jan 13 20:46:04.080322 sshd[3391]: Accepted publickey for core from 10.0.0.1 port 54226 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:04.082329 sshd-session[3391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:04.086543 systemd-logind[1485]: New session 8 of user core. Jan 13 20:46:04.091680 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:46:04.211708 sshd[3393]: Connection closed by 10.0.0.1 port 54226 Jan 13 20:46:04.212128 sshd-session[3391]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:04.216780 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:54226.service: Deactivated successfully. Jan 13 20:46:04.218869 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:46:04.219669 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:46:04.220864 systemd-logind[1485]: Removed session 8. Jan 13 20:46:04.229526 kubelet[2681]: E0113 20:46:04.229409 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:06.229930 kubelet[2681]: E0113 20:46:06.229887 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:06.382194 containerd[1494]: time="2025-01-13T20:46:06.382144041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:06.382957 containerd[1494]: 
time="2025-01-13T20:46:06.382914741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:46:06.384212 containerd[1494]: time="2025-01-13T20:46:06.384147778Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:06.386249 containerd[1494]: time="2025-01-13T20:46:06.386211037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:06.386912 containerd[1494]: time="2025-01-13T20:46:06.386877510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.085974552s" Jan 13 20:46:06.386912 containerd[1494]: time="2025-01-13T20:46:06.386908538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:46:06.388520 containerd[1494]: time="2025-01-13T20:46:06.388494248Z" level=info msg="CreateContainer within sandbox \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:46:06.401694 containerd[1494]: time="2025-01-13T20:46:06.401654076Z" level=info msg="CreateContainer within sandbox \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e\"" Jan 13 20:46:06.402039 
containerd[1494]: time="2025-01-13T20:46:06.402004754Z" level=info msg="StartContainer for \"97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e\"" Jan 13 20:46:06.437615 systemd[1]: Started cri-containerd-97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e.scope - libcontainer container 97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e. Jan 13 20:46:06.472043 containerd[1494]: time="2025-01-13T20:46:06.472002590Z" level=info msg="StartContainer for \"97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e\" returns successfully" Jan 13 20:46:07.309897 kubelet[2681]: E0113 20:46:07.309856 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:07.499631 containerd[1494]: time="2025-01-13T20:46:07.499579397Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:46:07.502607 systemd[1]: cri-containerd-97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e.scope: Deactivated successfully. Jan 13 20:46:07.524907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e-rootfs.mount: Deactivated successfully. 
Jan 13 20:46:07.578120 kubelet[2681]: I0113 20:46:07.577755 2681 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:46:07.700850 kubelet[2681]: I0113 20:46:07.700514 2681 topology_manager.go:215] "Topology Admit Handler" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" podNamespace="kube-system" podName="coredns-76f75df574-zjvgd" Jan 13 20:46:07.704185 kubelet[2681]: I0113 20:46:07.703578 2681 topology_manager.go:215] "Topology Admit Handler" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" podNamespace="calico-system" podName="calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:07.704185 kubelet[2681]: I0113 20:46:07.703680 2681 topology_manager.go:215] "Topology Admit Handler" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" podNamespace="calico-apiserver" podName="calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:07.704185 kubelet[2681]: I0113 20:46:07.703757 2681 topology_manager.go:215] "Topology Admit Handler" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" podNamespace="kube-system" podName="coredns-76f75df574-sj5ll" Jan 13 20:46:07.704544 kubelet[2681]: I0113 20:46:07.704526 2681 topology_manager.go:215] "Topology Admit Handler" podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" podNamespace="calico-apiserver" podName="calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:07.730669 systemd[1]: Created slice kubepods-besteffort-pod20a94580_01ed_434d_8ae3_3bc7fd6089f3.slice - libcontainer container kubepods-besteffort-pod20a94580_01ed_434d_8ae3_3bc7fd6089f3.slice. Jan 13 20:46:07.735289 systemd[1]: Created slice kubepods-burstable-pod82db675e_45a2_40cb_aaa5_0e3781350d23.slice - libcontainer container kubepods-burstable-pod82db675e_45a2_40cb_aaa5_0e3781350d23.slice. 
Jan 13 20:46:07.737831 kubelet[2681]: I0113 20:46:07.737812 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20a94580-01ed-434d-8ae3-3bc7fd6089f3-tigera-ca-bundle\") pod \"calico-kube-controllers-7995746cb4-vtxf5\" (UID: \"20a94580-01ed-434d-8ae3-3bc7fd6089f3\") " pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:07.737910 kubelet[2681]: I0113 20:46:07.737844 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/164635ec-fca2-4958-bf9f-f8a81545fa24-calico-apiserver-certs\") pod \"calico-apiserver-7fcd56cf7c-z54wj\" (UID: \"164635ec-fca2-4958-bf9f-f8a81545fa24\") " pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:07.737910 kubelet[2681]: I0113 20:46:07.737863 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmj56\" (UniqueName: \"kubernetes.io/projected/ea7e48ee-74c8-4c04-8866-2bd72cdc56d3-kube-api-access-kmj56\") pod \"coredns-76f75df574-sj5ll\" (UID: \"ea7e48ee-74c8-4c04-8866-2bd72cdc56d3\") " pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:07.737910 kubelet[2681]: I0113 20:46:07.737883 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2x2l\" (UniqueName: \"kubernetes.io/projected/164635ec-fca2-4958-bf9f-f8a81545fa24-kube-api-access-s2x2l\") pod \"calico-apiserver-7fcd56cf7c-z54wj\" (UID: \"164635ec-fca2-4958-bf9f-f8a81545fa24\") " pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:07.738019 kubelet[2681]: I0113 20:46:07.737994 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c89k\" (UniqueName: 
\"kubernetes.io/projected/20a94580-01ed-434d-8ae3-3bc7fd6089f3-kube-api-access-9c89k\") pod \"calico-kube-controllers-7995746cb4-vtxf5\" (UID: \"20a94580-01ed-434d-8ae3-3bc7fd6089f3\") " pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:07.738049 kubelet[2681]: I0113 20:46:07.738031 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea7e48ee-74c8-4c04-8866-2bd72cdc56d3-config-volume\") pod \"coredns-76f75df574-sj5ll\" (UID: \"ea7e48ee-74c8-4c04-8866-2bd72cdc56d3\") " pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:07.738075 kubelet[2681]: I0113 20:46:07.738053 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs277\" (UniqueName: \"kubernetes.io/projected/005c7b7f-f680-4342-abb9-808a0c23c33a-kube-api-access-fs277\") pod \"calico-apiserver-7fcd56cf7c-gf2sc\" (UID: \"005c7b7f-f680-4342-abb9-808a0c23c33a\") " pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:07.738155 kubelet[2681]: I0113 20:46:07.738139 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82db675e-45a2-40cb-aaa5-0e3781350d23-config-volume\") pod \"coredns-76f75df574-zjvgd\" (UID: \"82db675e-45a2-40cb-aaa5-0e3781350d23\") " pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:07.738189 kubelet[2681]: I0113 20:46:07.738179 2681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j564\" (UniqueName: \"kubernetes.io/projected/82db675e-45a2-40cb-aaa5-0e3781350d23-kube-api-access-9j564\") pod \"coredns-76f75df574-zjvgd\" (UID: \"82db675e-45a2-40cb-aaa5-0e3781350d23\") " pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:07.738245 kubelet[2681]: I0113 20:46:07.738234 2681 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/005c7b7f-f680-4342-abb9-808a0c23c33a-calico-apiserver-certs\") pod \"calico-apiserver-7fcd56cf7c-gf2sc\" (UID: \"005c7b7f-f680-4342-abb9-808a0c23c33a\") " pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:07.740604 systemd[1]: Created slice kubepods-besteffort-pod005c7b7f_f680_4342_abb9_808a0c23c33a.slice - libcontainer container kubepods-besteffort-pod005c7b7f_f680_4342_abb9_808a0c23c33a.slice. Jan 13 20:46:07.746013 systemd[1]: Created slice kubepods-burstable-podea7e48ee_74c8_4c04_8866_2bd72cdc56d3.slice - libcontainer container kubepods-burstable-podea7e48ee_74c8_4c04_8866_2bd72cdc56d3.slice. Jan 13 20:46:07.750482 systemd[1]: Created slice kubepods-besteffort-pod164635ec_fca2_4958_bf9f_f8a81545fa24.slice - libcontainer container kubepods-besteffort-pod164635ec_fca2_4958_bf9f_f8a81545fa24.slice. Jan 13 20:46:07.948120 containerd[1494]: time="2025-01-13T20:46:07.948038711Z" level=info msg="shim disconnected" id=97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e namespace=k8s.io Jan 13 20:46:07.948120 containerd[1494]: time="2025-01-13T20:46:07.948098174Z" level=warning msg="cleaning up after shim disconnected" id=97a0a9c7bc86c907b67960f630f0d799f5a0a66eb55f8cca7317a5de6cfb192e namespace=k8s.io Jan 13 20:46:07.948120 containerd[1494]: time="2025-01-13T20:46:07.948109765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:08.034190 containerd[1494]: time="2025-01-13T20:46:08.034119935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:08.038642 kubelet[2681]: E0113 20:46:08.038593 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 13 20:46:08.039180 containerd[1494]: time="2025-01-13T20:46:08.039131192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:08.043134 containerd[1494]: time="2025-01-13T20:46:08.043092927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:46:08.049593 kubelet[2681]: E0113 20:46:08.049376 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.049820 containerd[1494]: time="2025-01-13T20:46:08.049780504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:08.054241 containerd[1494]: time="2025-01-13T20:46:08.054207783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:46:08.161288 containerd[1494]: time="2025-01-13T20:46:08.161210577Z" level=error msg="Failed to destroy network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.162044 containerd[1494]: time="2025-01-13T20:46:08.161991355Z" level=error msg="Failed to destroy network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 13 20:46:08.162555 containerd[1494]: time="2025-01-13T20:46:08.162508376Z" level=error msg="encountered an error cleaning up failed sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.162797 containerd[1494]: time="2025-01-13T20:46:08.162744089Z" level=error msg="encountered an error cleaning up failed sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.162887 containerd[1494]: time="2025-01-13T20:46:08.162848164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.163120 containerd[1494]: time="2025-01-13T20:46:08.162974943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:46:08.163437 kubelet[2681]: E0113 20:46:08.163398 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.163951 kubelet[2681]: E0113 20:46:08.163419 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.163951 kubelet[2681]: E0113 20:46:08.163585 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:08.163951 kubelet[2681]: E0113 20:46:08.163608 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:08.164063 kubelet[2681]: E0113 
20:46:08.163666 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" Jan 13 20:46:08.164063 kubelet[2681]: E0113 20:46:08.163580 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:08.164063 kubelet[2681]: E0113 20:46:08.163707 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:08.165204 kubelet[2681]: E0113 20:46:08.165183 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" Jan 13 20:46:08.177941 containerd[1494]: time="2025-01-13T20:46:08.177870514Z" level=error msg="Failed to destroy network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.178489 containerd[1494]: time="2025-01-13T20:46:08.178396542Z" level=error msg="encountered an error cleaning up failed sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.178537 containerd[1494]: time="2025-01-13T20:46:08.178507841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 13 20:46:08.178865 kubelet[2681]: E0113 20:46:08.178832 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.178957 kubelet[2681]: E0113 20:46:08.178925 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:08.179022 kubelet[2681]: E0113 20:46:08.179003 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:08.179178 kubelet[2681]: E0113 20:46:08.179122 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" Jan 13 20:46:08.184525 containerd[1494]: time="2025-01-13T20:46:08.184483029Z" level=error msg="Failed to destroy network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.184993 containerd[1494]: time="2025-01-13T20:46:08.184962110Z" level=error msg="encountered an error cleaning up failed sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.185058 containerd[1494]: time="2025-01-13T20:46:08.185022874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.185243 kubelet[2681]: E0113 20:46:08.185217 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.185315 kubelet[2681]: E0113 20:46:08.185267 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:08.185315 kubelet[2681]: E0113 20:46:08.185293 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:08.185382 kubelet[2681]: E0113 20:46:08.185357 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" Jan 13 
20:46:08.187539 containerd[1494]: time="2025-01-13T20:46:08.187479470Z" level=error msg="Failed to destroy network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.187894 containerd[1494]: time="2025-01-13T20:46:08.187859815Z" level=error msg="encountered an error cleaning up failed sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.187973 containerd[1494]: time="2025-01-13T20:46:08.187930507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.188181 kubelet[2681]: E0113 20:46:08.188159 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.188254 kubelet[2681]: E0113 20:46:08.188209 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:08.188254 kubelet[2681]: E0113 20:46:08.188232 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:08.188329 kubelet[2681]: E0113 20:46:08.188294 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" Jan 13 20:46:08.235796 systemd[1]: Created slice kubepods-besteffort-pod39e13210_d183_473d_999b_c81aa9bc8ccf.slice - libcontainer container kubepods-besteffort-pod39e13210_d183_473d_999b_c81aa9bc8ccf.slice. 
Jan 13 20:46:08.239179 containerd[1494]: time="2025-01-13T20:46:08.239136802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:08.304143 containerd[1494]: time="2025-01-13T20:46:08.304069019Z" level=error msg="Failed to destroy network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.304581 containerd[1494]: time="2025-01-13T20:46:08.304547098Z" level=error msg="encountered an error cleaning up failed sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.304619 containerd[1494]: time="2025-01-13T20:46:08.304605487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.304859 kubelet[2681]: E0113 20:46:08.304834 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.304915 kubelet[2681]: E0113 20:46:08.304890 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:08.304915 kubelet[2681]: E0113 20:46:08.304911 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:08.304986 kubelet[2681]: E0113 20:46:08.304975 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:08.312569 kubelet[2681]: I0113 20:46:08.312512 2681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528" Jan 13 20:46:08.313388 kubelet[2681]: I0113 20:46:08.313220 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61" Jan 13 20:46:08.313485 containerd[1494]: time="2025-01-13T20:46:08.313225786Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:08.313731 containerd[1494]: time="2025-01-13T20:46:08.313669590Z" level=info msg="Ensure that sandbox 62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528 in task-service has been cleanup successfully" Jan 13 20:46:08.313925 containerd[1494]: time="2025-01-13T20:46:08.313892669Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:08.313925 containerd[1494]: time="2025-01-13T20:46:08.313912236Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:08.314282 kubelet[2681]: I0113 20:46:08.314262 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c" Jan 13 20:46:08.314358 containerd[1494]: time="2025-01-13T20:46:08.314341221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:46:08.314821 containerd[1494]: time="2025-01-13T20:46:08.314662035Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:08.315053 containerd[1494]: time="2025-01-13T20:46:08.315017633Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:08.315053 containerd[1494]: 
time="2025-01-13T20:46:08.315034063Z" level=info msg="Ensure that sandbox d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c in task-service has been cleanup successfully" Jan 13 20:46:08.315327 containerd[1494]: time="2025-01-13T20:46:08.315302969Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:08.315387 containerd[1494]: time="2025-01-13T20:46:08.315333206Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:08.315426 containerd[1494]: time="2025-01-13T20:46:08.315314410Z" level=info msg="Ensure that sandbox 3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61 in task-service has been cleanup successfully" Jan 13 20:46:08.315608 kubelet[2681]: I0113 20:46:08.315558 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62" Jan 13 20:46:08.315911 containerd[1494]: time="2025-01-13T20:46:08.315876086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:46:08.315998 containerd[1494]: time="2025-01-13T20:46:08.315978178Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:08.316021 containerd[1494]: time="2025-01-13T20:46:08.315995079Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:08.316042 containerd[1494]: time="2025-01-13T20:46:08.316012873Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:08.316175 containerd[1494]: time="2025-01-13T20:46:08.316151804Z" level=info msg="Ensure that 
sandbox 8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62 in task-service has been cleanup successfully" Jan 13 20:46:08.317009 containerd[1494]: time="2025-01-13T20:46:08.316494007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:1,}" Jan 13 20:46:08.317009 containerd[1494]: time="2025-01-13T20:46:08.316624763Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:08.317009 containerd[1494]: time="2025-01-13T20:46:08.316643628Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:08.317133 kubelet[2681]: E0113 20:46:08.316832 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.317350 containerd[1494]: time="2025-01-13T20:46:08.317319569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:1,}" Jan 13 20:46:08.318206 kubelet[2681]: I0113 20:46:08.317608 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c" Jan 13 20:46:08.318385 containerd[1494]: time="2025-01-13T20:46:08.318353051Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:08.318602 containerd[1494]: time="2025-01-13T20:46:08.318563496Z" level=info msg="Ensure that sandbox db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c in task-service has been cleanup successfully" Jan 13 20:46:08.318801 containerd[1494]: time="2025-01-13T20:46:08.318773260Z" level=info msg="TearDown network for sandbox 
\"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:08.318801 containerd[1494]: time="2025-01-13T20:46:08.318797977Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:08.319673 containerd[1494]: time="2025-01-13T20:46:08.319642383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:1,}" Jan 13 20:46:08.320906 kubelet[2681]: I0113 20:46:08.320871 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6" Jan 13 20:46:08.321066 kubelet[2681]: E0113 20:46:08.321031 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.321233 containerd[1494]: time="2025-01-13T20:46:08.321200361Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:08.321544 containerd[1494]: time="2025-01-13T20:46:08.321395067Z" level=info msg="Ensure that sandbox abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6 in task-service has been cleanup successfully" Jan 13 20:46:08.322036 containerd[1494]: time="2025-01-13T20:46:08.321925364Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:08.322036 containerd[1494]: time="2025-01-13T20:46:08.321964206Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:08.322350 containerd[1494]: time="2025-01-13T20:46:08.322133224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:46:08.322384 
kubelet[2681]: E0113 20:46:08.322173 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.322814 containerd[1494]: time="2025-01-13T20:46:08.322618306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:1,}" Jan 13 20:46:08.473384 containerd[1494]: time="2025-01-13T20:46:08.472939832Z" level=error msg="Failed to destroy network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.475273 containerd[1494]: time="2025-01-13T20:46:08.475192896Z" level=error msg="encountered an error cleaning up failed sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.475466 containerd[1494]: time="2025-01-13T20:46:08.475363476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.475586 containerd[1494]: time="2025-01-13T20:46:08.475543695Z" level=error msg="Failed to destroy network for sandbox 
\"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.476352 kubelet[2681]: E0113 20:46:08.475971 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.476352 kubelet[2681]: E0113 20:46:08.476025 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:08.476352 kubelet[2681]: E0113 20:46:08.476046 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:08.476472 containerd[1494]: time="2025-01-13T20:46:08.475977300Z" level=error msg="encountered an error cleaning up failed sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.476472 containerd[1494]: time="2025-01-13T20:46:08.476043193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.476528 kubelet[2681]: E0113 20:46:08.476095 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" Jan 13 20:46:08.476528 kubelet[2681]: E0113 20:46:08.476292 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:46:08.476528 kubelet[2681]: E0113 20:46:08.476314 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:08.476616 kubelet[2681]: E0113 20:46:08.476332 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:08.476616 kubelet[2681]: E0113 20:46:08.476362 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:08.490200 containerd[1494]: time="2025-01-13T20:46:08.490075913Z" level=error msg="Failed to destroy network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.490733 containerd[1494]: time="2025-01-13T20:46:08.490709664Z" level=error msg="encountered an error cleaning up failed sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.490938 containerd[1494]: time="2025-01-13T20:46:08.490909289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.491348 kubelet[2681]: E0113 20:46:08.491315 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.491846 kubelet[2681]: E0113 20:46:08.491502 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:08.491846 kubelet[2681]: E0113 20:46:08.491529 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:08.491846 kubelet[2681]: E0113 20:46:08.491596 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" Jan 13 20:46:08.493819 containerd[1494]: time="2025-01-13T20:46:08.493570690Z" level=error msg="Failed to destroy network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.494030 containerd[1494]: time="2025-01-13T20:46:08.494003073Z" level=error msg="encountered an 
error cleaning up failed sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.494085 containerd[1494]: time="2025-01-13T20:46:08.494056333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.494260 kubelet[2681]: E0113 20:46:08.494215 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.494260 kubelet[2681]: E0113 20:46:08.494265 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:08.494353 kubelet[2681]: E0113 20:46:08.494283 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:08.494353 kubelet[2681]: E0113 20:46:08.494330 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" Jan 13 20:46:08.499211 containerd[1494]: time="2025-01-13T20:46:08.499158901Z" level=error msg="Failed to destroy network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.499689 containerd[1494]: time="2025-01-13T20:46:08.499645385Z" level=error msg="encountered an error cleaning up failed sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.500126 containerd[1494]: 
time="2025-01-13T20:46:08.499708083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.500163 kubelet[2681]: E0113 20:46:08.499908 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.500163 kubelet[2681]: E0113 20:46:08.499949 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:08.500163 kubelet[2681]: E0113 20:46:08.499970 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:08.500242 kubelet[2681]: E0113 20:46:08.500015 
2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" Jan 13 20:46:08.511292 containerd[1494]: time="2025-01-13T20:46:08.511242478Z" level=error msg="Failed to destroy network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.511669 containerd[1494]: time="2025-01-13T20:46:08.511645576Z" level=error msg="encountered an error cleaning up failed sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.511730 containerd[1494]: time="2025-01-13T20:46:08.511698435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.511903 kubelet[2681]: E0113 20:46:08.511882 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:08.511988 kubelet[2681]: E0113 20:46:08.511924 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:08.511988 kubelet[2681]: E0113 20:46:08.511954 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:08.512035 kubelet[2681]: E0113 20:46:08.512009 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" Jan 13 20:46:09.227625 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:54236.service - OpenSSH per-connection server daemon (10.0.0.1:54236). Jan 13 20:46:09.278040 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 54236 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:09.280695 sshd-session[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:09.285371 systemd-logind[1485]: New session 9 of user core. Jan 13 20:46:09.295645 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 13 20:46:09.323836 kubelet[2681]: I0113 20:46:09.323798 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1" Jan 13 20:46:09.324565 containerd[1494]: time="2025-01-13T20:46:09.324512767Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:09.324801 containerd[1494]: time="2025-01-13T20:46:09.324755022Z" level=info msg="Ensure that sandbox 87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1 in task-service has been cleanup successfully" Jan 13 20:46:09.326751 containerd[1494]: time="2025-01-13T20:46:09.326710286Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:09.326751 containerd[1494]: time="2025-01-13T20:46:09.326738920Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:09.327245 containerd[1494]: time="2025-01-13T20:46:09.327217459Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:09.327842 containerd[1494]: time="2025-01-13T20:46:09.327306637Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:09.327842 containerd[1494]: time="2025-01-13T20:46:09.327316064Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:09.327842 containerd[1494]: time="2025-01-13T20:46:09.327826614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:46:09.327764 systemd[1]: run-netns-cni\x2d12c12fe7\x2d5e32\x2d0d95\x2d0565\x2d1dcec97bd198.mount: 
Deactivated successfully. Jan 13 20:46:09.328063 kubelet[2681]: I0113 20:46:09.327431 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451" Jan 13 20:46:09.328116 containerd[1494]: time="2025-01-13T20:46:09.327974502Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:09.328320 containerd[1494]: time="2025-01-13T20:46:09.328292369Z" level=info msg="Ensure that sandbox 5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451 in task-service has been cleanup successfully" Jan 13 20:46:09.329257 containerd[1494]: time="2025-01-13T20:46:09.328899008Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:09.329257 containerd[1494]: time="2025-01-13T20:46:09.328924757Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:09.329705 containerd[1494]: time="2025-01-13T20:46:09.329427080Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:09.329705 containerd[1494]: time="2025-01-13T20:46:09.329576281Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:09.329705 containerd[1494]: time="2025-01-13T20:46:09.329588835Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:09.329833 kubelet[2681]: I0113 20:46:09.329702 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991" Jan 13 20:46:09.330546 containerd[1494]: time="2025-01-13T20:46:09.330135301Z" level=info msg="StopPodSandbox for 
\"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:09.330546 containerd[1494]: time="2025-01-13T20:46:09.330331210Z" level=info msg="Ensure that sandbox 93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991 in task-service has been cleanup successfully" Jan 13 20:46:09.330629 kubelet[2681]: E0113 20:46:09.330184 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:09.330672 containerd[1494]: time="2025-01-13T20:46:09.330603521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:2,}" Jan 13 20:46:09.331015 containerd[1494]: time="2025-01-13T20:46:09.330867607Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:09.331015 containerd[1494]: time="2025-01-13T20:46:09.330885791Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:09.331280 systemd[1]: run-netns-cni\x2da7d292eb\x2df5ff\x2dc869\x2d9709\x2d6a339a9e3e43.mount: Deactivated successfully. 
Jan 13 20:46:09.331817 containerd[1494]: time="2025-01-13T20:46:09.331772447Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:09.332423 containerd[1494]: time="2025-01-13T20:46:09.331867326Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:09.332423 containerd[1494]: time="2025-01-13T20:46:09.331878106Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:09.332988 kubelet[2681]: E0113 20:46:09.332717 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:09.333615 containerd[1494]: time="2025-01-13T20:46:09.333439580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:2,}" Jan 13 20:46:09.333955 kubelet[2681]: I0113 20:46:09.333926 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab" Jan 13 20:46:09.334343 systemd[1]: run-netns-cni\x2d9025d314\x2d9e50\x2d6b47\x2dbefd\x2dd79bdadae8e3.mount: Deactivated successfully. 
Jan 13 20:46:09.335369 containerd[1494]: time="2025-01-13T20:46:09.334597284Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:09.335683 containerd[1494]: time="2025-01-13T20:46:09.335615488Z" level=info msg="Ensure that sandbox 9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab in task-service has been cleanup successfully" Jan 13 20:46:09.335892 containerd[1494]: time="2025-01-13T20:46:09.335833568Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:09.335892 containerd[1494]: time="2025-01-13T20:46:09.335884663Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:09.336215 containerd[1494]: time="2025-01-13T20:46:09.336178286Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:09.336471 containerd[1494]: time="2025-01-13T20:46:09.336307208Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:09.336471 containerd[1494]: time="2025-01-13T20:46:09.336317417Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:09.337752 containerd[1494]: time="2025-01-13T20:46:09.337440827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:46:09.338128 systemd[1]: run-netns-cni\x2d23a1639e\x2dd2eb\x2d6de0\x2d4a18\x2d8e16f317574d.mount: Deactivated successfully. 
Jan 13 20:46:09.338377 kubelet[2681]: I0113 20:46:09.338346 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09" Jan 13 20:46:09.338861 containerd[1494]: time="2025-01-13T20:46:09.338829687Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:09.339065 containerd[1494]: time="2025-01-13T20:46:09.339044150Z" level=info msg="Ensure that sandbox 23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09 in task-service has been cleanup successfully" Jan 13 20:46:09.339584 containerd[1494]: time="2025-01-13T20:46:09.339551624Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:09.341137 containerd[1494]: time="2025-01-13T20:46:09.339625562Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:09.341137 containerd[1494]: time="2025-01-13T20:46:09.339995557Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:09.341137 containerd[1494]: time="2025-01-13T20:46:09.340106146Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:09.341137 containerd[1494]: time="2025-01-13T20:46:09.340120873Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:09.341137 containerd[1494]: time="2025-01-13T20:46:09.340675635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:2,}" Jan 13 20:46:09.341137 containerd[1494]: time="2025-01-13T20:46:09.341007408Z" level=info 
msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:09.341354 kubelet[2681]: I0113 20:46:09.340615 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083" Jan 13 20:46:09.341404 containerd[1494]: time="2025-01-13T20:46:09.341381031Z" level=info msg="Ensure that sandbox e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083 in task-service has been cleanup successfully" Jan 13 20:46:09.341590 containerd[1494]: time="2025-01-13T20:46:09.341560829Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:09.341590 containerd[1494]: time="2025-01-13T20:46:09.341575897Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:09.341795 containerd[1494]: time="2025-01-13T20:46:09.341774309Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:09.341873 containerd[1494]: time="2025-01-13T20:46:09.341851605Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:09.341873 containerd[1494]: time="2025-01-13T20:46:09.341863337Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:09.342261 containerd[1494]: time="2025-01-13T20:46:09.342239263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:2,}" Jan 13 20:46:09.448711 sshd[3935]: Connection closed by 10.0.0.1 port 54236 Jan 13 20:46:09.449626 sshd-session[3933]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:09.459118 systemd[1]: 
sshd@8-10.0.0.138:22-10.0.0.1:54236.service: Deactivated successfully. Jan 13 20:46:09.462030 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:46:09.464823 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:46:09.467821 systemd-logind[1485]: Removed session 9. Jan 13 20:46:09.498444 containerd[1494]: time="2025-01-13T20:46:09.497099047Z" level=error msg="Failed to destroy network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.498975 containerd[1494]: time="2025-01-13T20:46:09.498932592Z" level=error msg="encountered an error cleaning up failed sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.499097 containerd[1494]: time="2025-01-13T20:46:09.499018574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.499344 kubelet[2681]: E0113 20:46:09.499307 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.499407 kubelet[2681]: E0113 20:46:09.499385 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:09.499437 kubelet[2681]: E0113 20:46:09.499414 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:09.499847 kubelet[2681]: E0113 20:46:09.499811 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" 
podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" Jan 13 20:46:09.501074 containerd[1494]: time="2025-01-13T20:46:09.501028480Z" level=error msg="Failed to destroy network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.502566 containerd[1494]: time="2025-01-13T20:46:09.502103080Z" level=error msg="encountered an error cleaning up failed sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.502566 containerd[1494]: time="2025-01-13T20:46:09.502172019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.502669 kubelet[2681]: E0113 20:46:09.502373 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.502669 kubelet[2681]: E0113 20:46:09.502421 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:09.502669 kubelet[2681]: E0113 20:46:09.502536 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:09.502927 kubelet[2681]: E0113 20:46:09.502887 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" Jan 13 20:46:09.519184 containerd[1494]: time="2025-01-13T20:46:09.518993374Z" level=error msg="Failed to destroy network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.520900 containerd[1494]: time="2025-01-13T20:46:09.520866283Z" level=error msg="encountered an error cleaning up failed sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.521070 containerd[1494]: time="2025-01-13T20:46:09.521040079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.521663 kubelet[2681]: E0113 20:46:09.521427 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.521663 kubelet[2681]: E0113 20:46:09.521519 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:09.521663 kubelet[2681]: E0113 20:46:09.521563 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:09.521812 kubelet[2681]: E0113 20:46:09.521627 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" Jan 13 20:46:09.526762 containerd[1494]: time="2025-01-13T20:46:09.526630624Z" level=error msg="Failed to destroy network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.527432 containerd[1494]: time="2025-01-13T20:46:09.527316162Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.527432 containerd[1494]: time="2025-01-13T20:46:09.527383638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.528023 kubelet[2681]: E0113 20:46:09.527822 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.528023 kubelet[2681]: E0113 20:46:09.527887 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:09.528023 kubelet[2681]: E0113 20:46:09.527926 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:09.528156 kubelet[2681]: E0113 20:46:09.527991 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" Jan 13 20:46:09.530624 systemd[1]: run-netns-cni\x2d7e5eebc5\x2d8f49\x2dd14e\x2dbe55\x2d259ae210fdc0.mount: Deactivated successfully. Jan 13 20:46:09.531082 containerd[1494]: time="2025-01-13T20:46:09.531033155Z" level=error msg="Failed to destroy network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.531350 systemd[1]: run-netns-cni\x2dba0225b2\x2d3ee5\x2d6725\x2d85e2\x2d4a2292a86439.mount: Deactivated successfully. 
Jan 13 20:46:09.531871 containerd[1494]: time="2025-01-13T20:46:09.531543344Z" level=error msg="encountered an error cleaning up failed sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.531871 containerd[1494]: time="2025-01-13T20:46:09.531593478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.531952 kubelet[2681]: E0113 20:46:09.531894 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.531992 kubelet[2681]: E0113 20:46:09.531967 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:09.532036 kubelet[2681]: E0113 20:46:09.531996 2681 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:09.532069 kubelet[2681]: E0113 20:46:09.532053 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:09.532265 containerd[1494]: time="2025-01-13T20:46:09.532238851Z" level=error msg="Failed to destroy network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.532591 containerd[1494]: time="2025-01-13T20:46:09.532565524Z" level=error msg="encountered an error cleaning up failed sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.532691 containerd[1494]: time="2025-01-13T20:46:09.532670773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.532942 kubelet[2681]: E0113 20:46:09.532925 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.533020 kubelet[2681]: E0113 20:46:09.533009 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:09.533086 kubelet[2681]: E0113 20:46:09.533076 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:09.533166 kubelet[2681]: E0113 20:46:09.533155 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" Jan 13 20:46:09.536221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7-shm.mount: Deactivated successfully. Jan 13 20:46:09.536365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558-shm.mount: Deactivated successfully. Jan 13 20:46:09.536561 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb-shm.mount: Deactivated successfully. 
Jan 13 20:46:10.344796 kubelet[2681]: I0113 20:46:10.344755 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb" Jan 13 20:46:10.345785 containerd[1494]: time="2025-01-13T20:46:10.345418570Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:10.345939 containerd[1494]: time="2025-01-13T20:46:10.345832287Z" level=info msg="Ensure that sandbox d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb in task-service has been cleanup successfully" Jan 13 20:46:10.348512 containerd[1494]: time="2025-01-13T20:46:10.348470453Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully" Jan 13 20:46:10.348512 containerd[1494]: time="2025-01-13T20:46:10.348496061Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully" Jan 13 20:46:10.349012 containerd[1494]: time="2025-01-13T20:46:10.348976765Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:10.349539 kubelet[2681]: I0113 20:46:10.349510 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558" Jan 13 20:46:10.350623 systemd[1]: run-netns-cni\x2da41e3af4\x2da876\x2dd65d\x2dd27c\x2d122dea0f2d3e.mount: Deactivated successfully. 
Jan 13 20:46:10.352695 kubelet[2681]: I0113 20:46:10.352186 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7" Jan 13 20:46:10.354950 kubelet[2681]: I0113 20:46:10.354924 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11" Jan 13 20:46:10.360175 kubelet[2681]: I0113 20:46:10.360132 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193" Jan 13 20:46:10.361508 containerd[1494]: time="2025-01-13T20:46:10.349069619Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:10.361602 containerd[1494]: time="2025-01-13T20:46:10.361520930Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:10.361602 containerd[1494]: time="2025-01-13T20:46:10.350193600Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" Jan 13 20:46:10.362211 containerd[1494]: time="2025-01-13T20:46:10.353012696Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:10.362211 containerd[1494]: time="2025-01-13T20:46:10.362014848Z" level=info msg="Ensure that sandbox e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7 in task-service has been cleanup successfully" Jan 13 20:46:10.363744 containerd[1494]: time="2025-01-13T20:46:10.361769156Z" level=info msg="Ensure that sandbox b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558 in task-service has been cleanup successfully" Jan 13 20:46:10.363806 containerd[1494]: time="2025-01-13T20:46:10.363748836Z" level=info msg="StopPodSandbox for 
\"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:10.364177 containerd[1494]: time="2025-01-13T20:46:10.364143808Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:10.365123 containerd[1494]: time="2025-01-13T20:46:10.364624862Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:10.365309 containerd[1494]: time="2025-01-13T20:46:10.365179313Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:10.365309 containerd[1494]: time="2025-01-13T20:46:10.364229108Z" level=info msg="Ensure that sandbox 688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193 in task-service has been cleanup successfully" Jan 13 20:46:10.366082 containerd[1494]: time="2025-01-13T20:46:10.356662884Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:10.366082 containerd[1494]: time="2025-01-13T20:46:10.365809467Z" level=info msg="Ensure that sandbox 46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11 in task-service has been cleanup successfully" Jan 13 20:46:10.366474 containerd[1494]: time="2025-01-13T20:46:10.364438912Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:10.366674 kubelet[2681]: I0113 20:46:10.366629 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0" Jan 13 20:46:10.369358 kubelet[2681]: E0113 20:46:10.367164 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:10.367884 systemd[1]: 
run-netns-cni\x2decf5205f\x2dfc65\x2d1210\x2d9750\x2de0f677fa56e8.mount: Deactivated successfully. Jan 13 20:46:10.370157 containerd[1494]: time="2025-01-13T20:46:10.370135935Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:10.370347 containerd[1494]: time="2025-01-13T20:46:10.366785551Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:10.370492 containerd[1494]: time="2025-01-13T20:46:10.370472948Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:10.370624 containerd[1494]: time="2025-01-13T20:46:10.369654801Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:10.370831 containerd[1494]: time="2025-01-13T20:46:10.370812666Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:10.370927 containerd[1494]: time="2025-01-13T20:46:10.369990261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:3,}" Jan 13 20:46:10.371622 containerd[1494]: time="2025-01-13T20:46:10.370049172Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:10.372138 systemd[1]: run-netns-cni\x2dac708d59\x2d799a\x2d1c8f\x2d5aa1\x2da71076d7f4e2.mount: Deactivated successfully. 
Jan 13 20:46:10.372354 containerd[1494]: time="2025-01-13T20:46:10.372270475Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:10.372418 containerd[1494]: time="2025-01-13T20:46:10.372355735Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:10.372418 containerd[1494]: time="2025-01-13T20:46:10.372365463Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:10.372609 containerd[1494]: time="2025-01-13T20:46:10.372577141Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:10.372661 containerd[1494]: time="2025-01-13T20:46:10.372623809Z" level=info msg="Ensure that sandbox 935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0 in task-service has been cleanup successfully" Jan 13 20:46:10.372698 containerd[1494]: time="2025-01-13T20:46:10.372663142Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:10.373057 containerd[1494]: time="2025-01-13T20:46:10.372675946Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:10.373057 containerd[1494]: time="2025-01-13T20:46:10.372879519Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully" Jan 13 20:46:10.373057 containerd[1494]: time="2025-01-13T20:46:10.372908233Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully" Jan 13 20:46:10.373057 containerd[1494]: time="2025-01-13T20:46:10.373007359Z" level=info msg="StopPodSandbox for 
\"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:10.372813 systemd[1]: run-netns-cni\x2d8ef97124\x2da1c1\x2d7030\x2d74da\x2d8df0048a651b.mount: Deactivated successfully. Jan 13 20:46:10.373271 containerd[1494]: time="2025-01-13T20:46:10.373196865Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:10.373271 containerd[1494]: time="2025-01-13T20:46:10.373238453Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:10.373271 containerd[1494]: time="2025-01-13T20:46:10.373251988Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:10.373371 containerd[1494]: time="2025-01-13T20:46:10.373274741Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:10.373371 containerd[1494]: time="2025-01-13T20:46:10.373286203Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:10.373371 containerd[1494]: time="2025-01-13T20:46:10.373322220Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:10.374269 containerd[1494]: time="2025-01-13T20:46:10.373398123Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:10.374269 containerd[1494]: time="2025-01-13T20:46:10.373408272Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:10.374269 containerd[1494]: time="2025-01-13T20:46:10.373440453Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 
20:46:10.374269 containerd[1494]: time="2025-01-13T20:46:10.374142661Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:10.374269 containerd[1494]: time="2025-01-13T20:46:10.374156247Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:10.374498 kubelet[2681]: E0113 20:46:10.374336 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:10.375168 containerd[1494]: time="2025-01-13T20:46:10.375115780Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:10.375250 containerd[1494]: time="2025-01-13T20:46:10.375209436Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:10.375250 containerd[1494]: time="2025-01-13T20:46:10.375222390Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:10.375310 containerd[1494]: time="2025-01-13T20:46:10.375292292Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:10.375397 containerd[1494]: time="2025-01-13T20:46:10.375371220Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:10.375397 containerd[1494]: time="2025-01-13T20:46:10.375391488Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:10.375558 containerd[1494]: time="2025-01-13T20:46:10.375501604Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:46:10.375773 containerd[1494]: time="2025-01-13T20:46:10.375747767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:3,}" Jan 13 20:46:10.376437 containerd[1494]: time="2025-01-13T20:46:10.376413779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:3,}" Jan 13 20:46:10.376698 containerd[1494]: time="2025-01-13T20:46:10.376663628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:3,}" Jan 13 20:46:10.376773 containerd[1494]: time="2025-01-13T20:46:10.376698063Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully" Jan 13 20:46:10.376773 containerd[1494]: time="2025-01-13T20:46:10.376726887Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully" Jan 13 20:46:10.377101 containerd[1494]: time="2025-01-13T20:46:10.377067617Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:10.377210 containerd[1494]: time="2025-01-13T20:46:10.377159118Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:10.377210 containerd[1494]: time="2025-01-13T20:46:10.377176432Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:10.377519 containerd[1494]: time="2025-01-13T20:46:10.377486945Z" level=info msg="StopPodSandbox for 
\"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:10.377643 containerd[1494]: time="2025-01-13T20:46:10.377563318Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:10.377643 containerd[1494]: time="2025-01-13T20:46:10.377573207Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:10.378219 containerd[1494]: time="2025-01-13T20:46:10.377987484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:46:10.524851 systemd[1]: run-netns-cni\x2dcb46a892\x2d1192\x2dccf6\x2dc60e\x2d9e1138a02c4f.mount: Deactivated successfully. Jan 13 20:46:10.525150 systemd[1]: run-netns-cni\x2d26588a19\x2db5be\x2dc9b4\x2d263d\x2d73963b814f66.mount: Deactivated successfully. 
Jan 13 20:46:10.685411 containerd[1494]: time="2025-01-13T20:46:10.685239902Z" level=error msg="Failed to destroy network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.687706 containerd[1494]: time="2025-01-13T20:46:10.685688866Z" level=error msg="encountered an error cleaning up failed sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.687706 containerd[1494]: time="2025-01-13T20:46:10.685744080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.687784 kubelet[2681]: E0113 20:46:10.686013 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.687784 kubelet[2681]: E0113 20:46:10.686074 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:10.687784 kubelet[2681]: E0113 20:46:10.686095 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:10.687926 kubelet[2681]: E0113 20:46:10.686157 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" Jan 13 20:46:10.702646 containerd[1494]: time="2025-01-13T20:46:10.702587442Z" level=error msg="Failed to destroy network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:46:10.703037 containerd[1494]: time="2025-01-13T20:46:10.703006840Z" level=error msg="encountered an error cleaning up failed sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.703118 containerd[1494]: time="2025-01-13T20:46:10.703075319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.703352 kubelet[2681]: E0113 20:46:10.703317 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.703435 kubelet[2681]: E0113 20:46:10.703379 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 
20:46:10.703435 kubelet[2681]: E0113 20:46:10.703402 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:10.703534 kubelet[2681]: E0113 20:46:10.703472 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" Jan 13 20:46:10.706745 containerd[1494]: time="2025-01-13T20:46:10.706699287Z" level=error msg="Failed to destroy network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.706897 containerd[1494]: time="2025-01-13T20:46:10.706847726Z" level=error msg="Failed to destroy network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.707339 containerd[1494]: time="2025-01-13T20:46:10.707305526Z" level=error msg="encountered an error cleaning up failed sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.707503 containerd[1494]: time="2025-01-13T20:46:10.707476186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.707766 kubelet[2681]: E0113 20:46:10.707719 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.707836 kubelet[2681]: E0113 20:46:10.707796 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:10.707836 kubelet[2681]: E0113 20:46:10.707816 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:10.708210 containerd[1494]: time="2025-01-13T20:46:10.708179809Z" level=error msg="encountered an error cleaning up failed sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.708326 containerd[1494]: time="2025-01-13T20:46:10.708298521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.708402 kubelet[2681]: E0113 20:46:10.708333 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" Jan 13 20:46:10.708859 kubelet[2681]: E0113 20:46:10.708833 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.709025 kubelet[2681]: E0113 20:46:10.709007 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:10.709125 kubelet[2681]: E0113 20:46:10.709112 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:10.709264 kubelet[2681]: E0113 20:46:10.709244 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" Jan 13 20:46:10.726837 containerd[1494]: time="2025-01-13T20:46:10.726766827Z" level=error msg="Failed to destroy network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.728369 containerd[1494]: time="2025-01-13T20:46:10.728336185Z" level=error msg="encountered an error cleaning up failed sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.728546 containerd[1494]: time="2025-01-13T20:46:10.728515312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.729171 kubelet[2681]: E0113 20:46:10.729149 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.729290 kubelet[2681]: E0113 20:46:10.729278 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:10.729367 kubelet[2681]: E0113 20:46:10.729356 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:10.729490 kubelet[2681]: E0113 20:46:10.729475 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:10.732218 containerd[1494]: time="2025-01-13T20:46:10.732160309Z" level=error msg="Failed to destroy network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.732702 containerd[1494]: time="2025-01-13T20:46:10.732647174Z" level=error msg="encountered an error cleaning up failed sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.733069 containerd[1494]: time="2025-01-13T20:46:10.732943009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.733265 kubelet[2681]: E0113 20:46:10.733217 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:10.733318 kubelet[2681]: E0113 20:46:10.733269 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:10.733318 kubelet[2681]: E0113 20:46:10.733291 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:10.733376 kubelet[2681]: E0113 20:46:10.733344 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" Jan 13 20:46:11.369884 kubelet[2681]: I0113 20:46:11.369829 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c" Jan 13 20:46:11.370900 containerd[1494]: time="2025-01-13T20:46:11.370350928Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" Jan 13 20:46:11.370900 containerd[1494]: time="2025-01-13T20:46:11.370579587Z" level=info msg="Ensure that sandbox 6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c in task-service has been cleanup successfully" Jan 13 20:46:11.371114 containerd[1494]: time="2025-01-13T20:46:11.371081620Z" level=info msg="TearDown network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" successfully" Jan 13 20:46:11.371114 containerd[1494]: time="2025-01-13T20:46:11.371103391Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" returns successfully" Jan 13 20:46:11.371177 kubelet[2681]: I0113 20:46:11.371139 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28" Jan 13 20:46:11.371492 containerd[1494]: time="2025-01-13T20:46:11.371436246Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:11.371620 containerd[1494]: time="2025-01-13T20:46:11.371603511Z" level=info msg="Ensure that sandbox 7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28 in task-service has been cleanup successfully" Jan 13 20:46:11.371810 containerd[1494]: time="2025-01-13T20:46:11.371638125Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:11.371810 containerd[1494]: 
time="2025-01-13T20:46:11.371761036Z" level=info msg="TearDown network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" successfully" Jan 13 20:46:11.371810 containerd[1494]: time="2025-01-13T20:46:11.371772497Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" returns successfully" Jan 13 20:46:11.371810 containerd[1494]: time="2025-01-13T20:46:11.371803346Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully" Jan 13 20:46:11.371933 containerd[1494]: time="2025-01-13T20:46:11.371818865Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully" Jan 13 20:46:11.372442 containerd[1494]: time="2025-01-13T20:46:11.372414875Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:11.372652 containerd[1494]: time="2025-01-13T20:46:11.372529971Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:11.372652 containerd[1494]: time="2025-01-13T20:46:11.372549137Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:11.372652 containerd[1494]: time="2025-01-13T20:46:11.372634947Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:11.372733 containerd[1494]: time="2025-01-13T20:46:11.372715600Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:11.372733 containerd[1494]: time="2025-01-13T20:46:11.372727342Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:11.373071 
containerd[1494]: time="2025-01-13T20:46:11.373042043Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:11.373146 containerd[1494]: time="2025-01-13T20:46:11.373084933Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:11.373193 containerd[1494]: time="2025-01-13T20:46:11.373172197Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:11.373193 containerd[1494]: time="2025-01-13T20:46:11.373185231Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:11.373327 containerd[1494]: time="2025-01-13T20:46:11.373303714Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:11.373403 containerd[1494]: time="2025-01-13T20:46:11.373348538Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:11.373446 containerd[1494]: time="2025-01-13T20:46:11.373427837Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:11.373542 containerd[1494]: time="2025-01-13T20:46:11.373514339Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:11.373542 containerd[1494]: time="2025-01-13T20:46:11.373528255Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:11.373750 kubelet[2681]: I0113 20:46:11.373660 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8" Jan 13 
20:46:11.373980 containerd[1494]: time="2025-01-13T20:46:11.373924090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:4,}" Jan 13 20:46:11.374386 containerd[1494]: time="2025-01-13T20:46:11.374095301Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:11.374386 containerd[1494]: time="2025-01-13T20:46:11.374271151Z" level=info msg="Ensure that sandbox 97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8 in task-service has been cleanup successfully" Jan 13 20:46:11.374541 containerd[1494]: time="2025-01-13T20:46:11.374524257Z" level=info msg="TearDown network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" successfully" Jan 13 20:46:11.374609 containerd[1494]: time="2025-01-13T20:46:11.374592825Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" returns successfully" Jan 13 20:46:11.374845 containerd[1494]: time="2025-01-13T20:46:11.374824791Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:11.374927 containerd[1494]: time="2025-01-13T20:46:11.374911284Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:11.375082 containerd[1494]: time="2025-01-13T20:46:11.375055335Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:11.375302 containerd[1494]: time="2025-01-13T20:46:11.375284686Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:11.375523 containerd[1494]: time="2025-01-13T20:46:11.375360929Z" level=info msg="TearDown network for sandbox 
\"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:11.375523 containerd[1494]: time="2025-01-13T20:46:11.375374845Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:11.375696 containerd[1494]: time="2025-01-13T20:46:11.375676892Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:11.375761 containerd[1494]: time="2025-01-13T20:46:11.375747585Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:11.375789 containerd[1494]: time="2025-01-13T20:46:11.375759687Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:11.375940 kubelet[2681]: I0113 20:46:11.375912 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1" Jan 13 20:46:11.376375 containerd[1494]: time="2025-01-13T20:46:11.376350818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:46:11.376537 containerd[1494]: time="2025-01-13T20:46:11.376349866Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:11.376627 containerd[1494]: time="2025-01-13T20:46:11.376608392Z" level=info msg="Ensure that sandbox 460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1 in task-service has been cleanup successfully" Jan 13 20:46:11.376876 containerd[1494]: time="2025-01-13T20:46:11.376845207Z" level=info msg="TearDown network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" successfully" Jan 13 20:46:11.376876 
containerd[1494]: time="2025-01-13T20:46:11.376863050Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" returns successfully" Jan 13 20:46:11.377136 containerd[1494]: time="2025-01-13T20:46:11.377106798Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:11.377197 containerd[1494]: time="2025-01-13T20:46:11.377183181Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:11.377230 containerd[1494]: time="2025-01-13T20:46:11.377196176Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:11.377512 containerd[1494]: time="2025-01-13T20:46:11.377488635Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:11.377602 containerd[1494]: time="2025-01-13T20:46:11.377585156Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:11.377634 containerd[1494]: time="2025-01-13T20:46:11.377600976Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:11.378605 containerd[1494]: time="2025-01-13T20:46:11.378544689Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:11.378796 containerd[1494]: time="2025-01-13T20:46:11.378716251Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:11.378796 containerd[1494]: time="2025-01-13T20:46:11.378730287Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:11.378942 
kubelet[2681]: E0113 20:46:11.378924 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:11.378983 kubelet[2681]: I0113 20:46:11.378949 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9" Jan 13 20:46:11.379110 containerd[1494]: time="2025-01-13T20:46:11.379091547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:4,}" Jan 13 20:46:11.379366 containerd[1494]: time="2025-01-13T20:46:11.379347196Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" Jan 13 20:46:11.379527 containerd[1494]: time="2025-01-13T20:46:11.379502188Z" level=info msg="Ensure that sandbox 23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9 in task-service has been cleanup successfully" Jan 13 20:46:11.380106 containerd[1494]: time="2025-01-13T20:46:11.380085533Z" level=info msg="TearDown network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" successfully" Jan 13 20:46:11.380106 containerd[1494]: time="2025-01-13T20:46:11.380102465Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" returns successfully" Jan 13 20:46:11.380311 containerd[1494]: time="2025-01-13T20:46:11.380291831Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:11.380374 containerd[1494]: time="2025-01-13T20:46:11.380359107Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully" Jan 13 20:46:11.380374 containerd[1494]: time="2025-01-13T20:46:11.380370509Z" level=info msg="StopPodSandbox for 
\"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully" Jan 13 20:46:11.381722 containerd[1494]: time="2025-01-13T20:46:11.381583186Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:11.381722 containerd[1494]: time="2025-01-13T20:46:11.381675700Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:11.381722 containerd[1494]: time="2025-01-13T20:46:11.381687612Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:11.525327 systemd[1]: run-netns-cni\x2d512001ec\x2dce45\x2d737d\x2de692\x2d845aa03732dd.mount: Deactivated successfully. Jan 13 20:46:11.525439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28-shm.mount: Deactivated successfully. Jan 13 20:46:11.525533 systemd[1]: run-netns-cni\x2d0b29b9ba\x2dd267\x2d4bf9\x2d05b8\x2d15ff613f6e15.mount: Deactivated successfully. Jan 13 20:46:11.525606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8-shm.mount: Deactivated successfully. Jan 13 20:46:11.525678 systemd[1]: run-netns-cni\x2d7c812de1\x2d7f6b\x2d8ae8\x2d732a\x2dace68d52253a.mount: Deactivated successfully. Jan 13 20:46:11.525747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9-shm.mount: Deactivated successfully. Jan 13 20:46:11.525817 systemd[1]: run-netns-cni\x2df20eea9c\x2d9b9b\x2deb0f\x2d8465\x2d1b680c3e8fee.mount: Deactivated successfully. Jan 13 20:46:11.525895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1-shm.mount: Deactivated successfully. 
Jan 13 20:46:12.071366 containerd[1494]: time="2025-01-13T20:46:12.070673310Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\""
Jan 13 20:46:12.071366 containerd[1494]: time="2025-01-13T20:46:12.070799076Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully"
Jan 13 20:46:12.071366 containerd[1494]: time="2025-01-13T20:46:12.070813963Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully"
Jan 13 20:46:12.072022 kubelet[2681]: E0113 20:46:12.071144 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:12.072068 containerd[1494]: time="2025-01-13T20:46:12.072024016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:4,}"
Jan 13 20:46:12.072378 kubelet[2681]: I0113 20:46:12.072334 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2"
Jan 13 20:46:12.073153 containerd[1494]: time="2025-01-13T20:46:12.073103363Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\""
Jan 13 20:46:12.073561 containerd[1494]: time="2025-01-13T20:46:12.073516650Z" level=info msg="Ensure that sandbox ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2 in task-service has been cleanup successfully"
Jan 13 20:46:12.074687 containerd[1494]: time="2025-01-13T20:46:12.074363259Z" level=info msg="TearDown network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" successfully"
Jan 13 20:46:12.074687 containerd[1494]: time="2025-01-13T20:46:12.074411249Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" returns successfully"
Jan 13 20:46:12.075589 containerd[1494]: time="2025-01-13T20:46:12.075341887Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\""
Jan 13 20:46:12.075589 containerd[1494]: time="2025-01-13T20:46:12.075518469Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully"
Jan 13 20:46:12.075589 containerd[1494]: time="2025-01-13T20:46:12.075533868Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully"
Jan 13 20:46:12.075707 containerd[1494]: time="2025-01-13T20:46:12.075639316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:4,}"
Jan 13 20:46:12.076048 containerd[1494]: time="2025-01-13T20:46:12.076023138Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\""
Jan 13 20:46:12.076143 containerd[1494]: time="2025-01-13T20:46:12.076123376Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully"
Jan 13 20:46:12.076143 containerd[1494]: time="2025-01-13T20:46:12.076138384Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully"
Jan 13 20:46:12.076567 containerd[1494]: time="2025-01-13T20:46:12.076511865Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\""
Jan 13 20:46:12.076630 containerd[1494]: time="2025-01-13T20:46:12.076590693Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully"
Jan 13 20:46:12.076630 containerd[1494]: time="2025-01-13T20:46:12.076600182Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully"
Jan 13 20:46:12.076767 systemd[1]: run-netns-cni\x2db7d02b7a\x2d2bb5\x2d8d60\x2d2a2e\x2d8dea5af72b9a.mount: Deactivated successfully.
Jan 13 20:46:12.077231 containerd[1494]: time="2025-01-13T20:46:12.077130206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:4,}"
Jan 13 20:46:12.275303 containerd[1494]: time="2025-01-13T20:46:12.274556053Z" level=error msg="Failed to destroy network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.276192 containerd[1494]: time="2025-01-13T20:46:12.276150688Z" level=error msg="encountered an error cleaning up failed sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.276267 containerd[1494]: time="2025-01-13T20:46:12.276228694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.276552 kubelet[2681]: E0113 20:46:12.276519 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.276686 kubelet[2681]: E0113 20:46:12.276596 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd"
Jan 13 20:46:12.276686 kubelet[2681]: E0113 20:46:12.276625 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd"
Jan 13 20:46:12.276758 kubelet[2681]: E0113 20:46:12.276687 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23"
Jan 13 20:46:12.322673 containerd[1494]: time="2025-01-13T20:46:12.321623094Z" level=error msg="Failed to destroy network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.322673 containerd[1494]: time="2025-01-13T20:46:12.322401986Z" level=error msg="encountered an error cleaning up failed sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.324661 containerd[1494]: time="2025-01-13T20:46:12.324627757Z" level=error msg="Failed to destroy network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.325031 containerd[1494]: time="2025-01-13T20:46:12.324799580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.325384 kubelet[2681]: E0113 20:46:12.325339 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.325471 kubelet[2681]: E0113 20:46:12.325417 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj"
Jan 13 20:46:12.325471 kubelet[2681]: E0113 20:46:12.325446 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj"
Jan 13 20:46:12.325564 kubelet[2681]: E0113 20:46:12.325526 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" podUID="164635ec-fca2-4958-bf9f-f8a81545fa24"
Jan 13 20:46:12.326053 containerd[1494]: time="2025-01-13T20:46:12.326023910Z" level=error msg="encountered an error cleaning up failed sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.326599 containerd[1494]: time="2025-01-13T20:46:12.326367164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.328231 kubelet[2681]: E0113 20:46:12.328199 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.328295 kubelet[2681]: E0113 20:46:12.328248 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5"
Jan 13 20:46:12.328295 kubelet[2681]: E0113 20:46:12.328270 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5"
Jan 13 20:46:12.328366 kubelet[2681]: E0113 20:46:12.328322 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3"
Jan 13 20:46:12.329679 containerd[1494]: time="2025-01-13T20:46:12.329651462Z" level=error msg="Failed to destroy network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.330248 containerd[1494]: time="2025-01-13T20:46:12.330217816Z" level=error msg="encountered an error cleaning up failed sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.331072 containerd[1494]: time="2025-01-13T20:46:12.331043006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.331433 kubelet[2681]: E0113 20:46:12.331407 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.331619 kubelet[2681]: E0113 20:46:12.331602 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll"
Jan 13 20:46:12.331825 kubelet[2681]: E0113 20:46:12.331800 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll"
Jan 13 20:46:12.331980 kubelet[2681]: E0113 20:46:12.331964 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3"
Jan 13 20:46:12.343790 containerd[1494]: time="2025-01-13T20:46:12.343548714Z" level=error msg="Failed to destroy network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.344335 containerd[1494]: time="2025-01-13T20:46:12.344260972Z" level=error msg="encountered an error cleaning up failed sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.344335 containerd[1494]: time="2025-01-13T20:46:12.344317899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.344587 kubelet[2681]: E0113 20:46:12.344557 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.344689 kubelet[2681]: E0113 20:46:12.344619 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5"
Jan 13 20:46:12.344689 kubelet[2681]: E0113 20:46:12.344643 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5"
Jan 13 20:46:12.344762 kubelet[2681]: E0113 20:46:12.344700 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf"
Jan 13 20:46:12.351052 containerd[1494]: time="2025-01-13T20:46:12.351010109Z" level=error msg="Failed to destroy network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.351555 containerd[1494]: time="2025-01-13T20:46:12.351445706Z" level=error msg="encountered an error cleaning up failed sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.351555 containerd[1494]: time="2025-01-13T20:46:12.351534263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.351818 kubelet[2681]: E0113 20:46:12.351776 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:46:12.352002 kubelet[2681]: E0113 20:46:12.351838 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc"
Jan 13 20:46:12.352002 kubelet[2681]: E0113 20:46:12.351912 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc"
Jan 13 20:46:12.352002 kubelet[2681]: E0113 20:46:12.351977 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a"
Jan 13 20:46:12.532671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538-shm.mount: Deactivated successfully.
Jan 13 20:46:12.804969 kubelet[2681]: I0113 20:46:12.804929 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:46:12.805658 kubelet[2681]: E0113 20:46:12.805625 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:13.078107 kubelet[2681]: I0113 20:46:13.077980 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2"
Jan 13 20:46:13.079187 containerd[1494]: time="2025-01-13T20:46:13.079007890Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\""
Jan 13 20:46:13.079798 containerd[1494]: time="2025-01-13T20:46:13.079255615Z" level=info msg="Ensure that sandbox c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2 in task-service has been cleanup successfully"
Jan 13 20:46:13.080926 containerd[1494]: time="2025-01-13T20:46:13.080888001Z" level=info msg="TearDown network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" successfully"
Jan 13 20:46:13.080926 containerd[1494]: time="2025-01-13T20:46:13.080913288Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" returns successfully"
Jan 13 20:46:13.081328 kubelet[2681]: I0113 20:46:13.081286 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798"
Jan 13 20:46:13.081928 containerd[1494]: time="2025-01-13T20:46:13.081890774Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\""
Jan 13 20:46:13.082116 containerd[1494]: time="2025-01-13T20:46:13.082092352Z" level=info msg="Ensure that sandbox 6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798 in task-service has been cleanup successfully"
Jan 13 20:46:13.082569 containerd[1494]: time="2025-01-13T20:46:13.082368581Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\""
Jan 13 20:46:13.082569 containerd[1494]: time="2025-01-13T20:46:13.082497003Z" level=info msg="TearDown network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" successfully"
Jan 13 20:46:13.082569 containerd[1494]: time="2025-01-13T20:46:13.082511580Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" returns successfully"
Jan 13 20:46:13.082984 containerd[1494]: time="2025-01-13T20:46:13.082661070Z" level=info msg="TearDown network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" successfully"
Jan 13 20:46:13.082984 containerd[1494]: time="2025-01-13T20:46:13.082723397Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" returns successfully"
Jan 13 20:46:13.083112 containerd[1494]: time="2025-01-13T20:46:13.083073865Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\""
Jan 13 20:46:13.083217 containerd[1494]: time="2025-01-13T20:46:13.083181779Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully"
Jan 13 20:46:13.083217 containerd[1494]: time="2025-01-13T20:46:13.083210553Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully"
Jan 13 20:46:13.083500 containerd[1494]: time="2025-01-13T20:46:13.083303958Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\""
Jan 13 20:46:13.083500 containerd[1494]: time="2025-01-13T20:46:13.083387665Z" level=info msg="TearDown network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" successfully"
Jan 13 20:46:13.083500 containerd[1494]: time="2025-01-13T20:46:13.083403114Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" returns successfully"
Jan 13 20:46:13.083268 systemd[1]: run-netns-cni\x2dda23d720\x2dfc0e\x2dea9e\x2d33c5\x2d49ae1235a064.mount: Deactivated successfully.
Jan 13 20:46:13.084016 containerd[1494]: time="2025-01-13T20:46:13.083904726Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\""
Jan 13 20:46:13.084016 containerd[1494]: time="2025-01-13T20:46:13.084000746Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully"
Jan 13 20:46:13.084016 containerd[1494]: time="2025-01-13T20:46:13.084015314Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully"
Jan 13 20:46:13.084565 containerd[1494]: time="2025-01-13T20:46:13.084128046Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\""
Jan 13 20:46:13.084565 containerd[1494]: time="2025-01-13T20:46:13.084211893Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully"
Jan 13 20:46:13.084565 containerd[1494]: time="2025-01-13T20:46:13.084225449Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully"
Jan 13 20:46:13.084565 containerd[1494]: time="2025-01-13T20:46:13.084379598Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\""
Jan 13 20:46:13.084924 containerd[1494]: time="2025-01-13T20:46:13.084895076Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\""
Jan 13 20:46:13.085023 containerd[1494]: time="2025-01-13T20:46:13.084998550Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully"
Jan 13 20:46:13.085023 containerd[1494]: time="2025-01-13T20:46:13.085018949Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully"
Jan 13 20:46:13.085230 containerd[1494]: time="2025-01-13T20:46:13.085173259Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully"
Jan 13 20:46:13.085230 containerd[1494]: time="2025-01-13T20:46:13.085194429Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully"
Jan 13 20:46:13.085674 containerd[1494]: time="2025-01-13T20:46:13.085537002Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\""
Jan 13 20:46:13.085855 containerd[1494]: time="2025-01-13T20:46:13.085742537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:5,}"
Jan 13 20:46:13.086720 containerd[1494]: time="2025-01-13T20:46:13.086353515Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully"
Jan 13 20:46:13.086720 containerd[1494]: time="2025-01-13T20:46:13.086374755Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully"
Jan 13 20:46:13.086821 kubelet[2681]: E0113 20:46:13.086586 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:13.086881 containerd[1494]: time="2025-01-13T20:46:13.086807759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:5,}"
Jan 13 20:46:13.087778 kubelet[2681]: I0113 20:46:13.087744 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a"
Jan 13 20:46:13.088202 containerd[1494]: time="2025-01-13T20:46:13.088167883Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\""
Jan 13 20:46:13.088573 containerd[1494]: time="2025-01-13T20:46:13.088333624Z" level=info msg="Ensure that sandbox bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a in task-service has been cleanup successfully"
Jan 13 20:46:13.090768 containerd[1494]: time="2025-01-13T20:46:13.088899677Z" level=info msg="TearDown network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" successfully"
Jan 13 20:46:13.090768 containerd[1494]: time="2025-01-13T20:46:13.088917470Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" returns successfully"
Jan 13 20:46:13.088895 systemd[1]: run-netns-cni\x2d4da1fe37\x2d87f6\x2df002\x2d6b98\x2d2fbc2bf064d1.mount: Deactivated successfully.
Jan 13 20:46:13.091512 containerd[1494]: time="2025-01-13T20:46:13.091480183Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\""
Jan 13 20:46:13.091730 containerd[1494]: time="2025-01-13T20:46:13.091575121Z" level=info msg="TearDown network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" successfully"
Jan 13 20:46:13.091730 containerd[1494]: time="2025-01-13T20:46:13.091589749Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" returns successfully"
Jan 13 20:46:13.092021 containerd[1494]: time="2025-01-13T20:46:13.091985241Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\""
Jan 13 20:46:13.092143 containerd[1494]: time="2025-01-13T20:46:13.092082834Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully"
Jan 13 20:46:13.092143 containerd[1494]: time="2025-01-13T20:46:13.092102272Z" level=info
msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully" Jan 13 20:46:13.093288 containerd[1494]: time="2025-01-13T20:46:13.093109373Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:13.093288 containerd[1494]: time="2025-01-13T20:46:13.093224178Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:13.093288 containerd[1494]: time="2025-01-13T20:46:13.093239457Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:13.093216 systemd[1]: run-netns-cni\x2d16359450\x2d32d1\x2d7acd\x2d670c\x2dd510fc680263.mount: Deactivated successfully. Jan 13 20:46:13.093985 containerd[1494]: time="2025-01-13T20:46:13.093946414Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:13.094068 containerd[1494]: time="2025-01-13T20:46:13.094039930Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:13.094068 containerd[1494]: time="2025-01-13T20:46:13.094057523Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:13.094708 containerd[1494]: time="2025-01-13T20:46:13.094675233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:46:13.095524 kubelet[2681]: I0113 20:46:13.095365 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655" Jan 13 20:46:13.095860 containerd[1494]: time="2025-01-13T20:46:13.095816878Z" level=info 
msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" Jan 13 20:46:13.096111 containerd[1494]: time="2025-01-13T20:46:13.096053531Z" level=info msg="Ensure that sandbox f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655 in task-service has been cleanup successfully" Jan 13 20:46:13.097508 containerd[1494]: time="2025-01-13T20:46:13.097443162Z" level=info msg="TearDown network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" successfully" Jan 13 20:46:13.097508 containerd[1494]: time="2025-01-13T20:46:13.097487325Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" returns successfully" Jan 13 20:46:13.097996 containerd[1494]: time="2025-01-13T20:46:13.097967276Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:13.098639 containerd[1494]: time="2025-01-13T20:46:13.098599533Z" level=info msg="TearDown network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" successfully" Jan 13 20:46:13.098639 containerd[1494]: time="2025-01-13T20:46:13.098620633Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" returns successfully" Jan 13 20:46:13.099099 containerd[1494]: time="2025-01-13T20:46:13.098919474Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:13.099099 containerd[1494]: time="2025-01-13T20:46:13.099084042Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:13.099099 containerd[1494]: time="2025-01-13T20:46:13.099097798Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:13.099630 containerd[1494]: 
time="2025-01-13T20:46:13.099587539Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:13.100503 kubelet[2681]: I0113 20:46:13.099749 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8" Jan 13 20:46:13.100154 systemd[1]: run-netns-cni\x2de5ccb196\x2d2c99\x2d0de7\x2de7f5\x2d609c03563276.mount: Deactivated successfully. Jan 13 20:46:13.100654 containerd[1494]: time="2025-01-13T20:46:13.100136951Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" Jan 13 20:46:13.100654 containerd[1494]: time="2025-01-13T20:46:13.100382631Z" level=info msg="Ensure that sandbox cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8 in task-service has been cleanup successfully" Jan 13 20:46:13.100654 containerd[1494]: time="2025-01-13T20:46:13.100482199Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:13.100654 containerd[1494]: time="2025-01-13T20:46:13.100500503Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.100778195Z" level=info msg="TearDown network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" successfully" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.100798523Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" returns successfully" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.101141557Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.101210927Z" 
level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.101225615Z" level=info msg="TearDown network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" successfully" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.101243358Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" returns successfully" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.101484672Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:13.101548 containerd[1494]: time="2025-01-13T20:46:13.101505471Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:13.102337 containerd[1494]: time="2025-01-13T20:46:13.102012903Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:13.102337 containerd[1494]: time="2025-01-13T20:46:13.102128982Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:13.102337 containerd[1494]: time="2025-01-13T20:46:13.102144190Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:13.102337 containerd[1494]: time="2025-01-13T20:46:13.102262692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:5,}" Jan 13 20:46:13.102492 containerd[1494]: time="2025-01-13T20:46:13.102398998Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:13.102674 containerd[1494]: 
time="2025-01-13T20:46:13.102578395Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:13.102743 containerd[1494]: time="2025-01-13T20:46:13.102666861Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:13.102966 containerd[1494]: time="2025-01-13T20:46:13.102905830Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:13.103057 containerd[1494]: time="2025-01-13T20:46:13.103004455Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:13.103057 containerd[1494]: time="2025-01-13T20:46:13.103023180Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:13.103410 containerd[1494]: time="2025-01-13T20:46:13.103365684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:46:13.103932 kubelet[2681]: I0113 20:46:13.103869 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538" Jan 13 20:46:13.104816 kubelet[2681]: E0113 20:46:13.104700 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:13.104878 containerd[1494]: time="2025-01-13T20:46:13.104702414Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" Jan 13 20:46:13.105064 containerd[1494]: time="2025-01-13T20:46:13.104966220Z" level=info msg="Ensure that sandbox 
9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538 in task-service has been cleanup successfully" Jan 13 20:46:13.105520 containerd[1494]: time="2025-01-13T20:46:13.105294266Z" level=info msg="TearDown network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" successfully" Jan 13 20:46:13.105520 containerd[1494]: time="2025-01-13T20:46:13.105313722Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" returns successfully" Jan 13 20:46:13.105824 containerd[1494]: time="2025-01-13T20:46:13.105610870Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:13.105824 containerd[1494]: time="2025-01-13T20:46:13.105711749Z" level=info msg="TearDown network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" successfully" Jan 13 20:46:13.105824 containerd[1494]: time="2025-01-13T20:46:13.105726297Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" returns successfully" Jan 13 20:46:13.106475 containerd[1494]: time="2025-01-13T20:46:13.106428637Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:13.106653 containerd[1494]: time="2025-01-13T20:46:13.106630565Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:13.106653 containerd[1494]: time="2025-01-13T20:46:13.106650372Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:13.106969 containerd[1494]: time="2025-01-13T20:46:13.106909419Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:13.107144 containerd[1494]: time="2025-01-13T20:46:13.107054902Z" level=info 
msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:13.107144 containerd[1494]: time="2025-01-13T20:46:13.107074590Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:13.107372 containerd[1494]: time="2025-01-13T20:46:13.107345418Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:13.107490 containerd[1494]: time="2025-01-13T20:46:13.107436650Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:13.107534 containerd[1494]: time="2025-01-13T20:46:13.107485080Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:13.107678 kubelet[2681]: E0113 20:46:13.107655 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:13.107919 containerd[1494]: time="2025-01-13T20:46:13.107890261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:5,}" Jan 13 20:46:13.525956 systemd[1]: run-netns-cni\x2d8784b6ae\x2db77f\x2db051\x2d3c13\x2dd08c77dcaaaa.mount: Deactivated successfully. Jan 13 20:46:13.526431 systemd[1]: run-netns-cni\x2d3b5dbb8b\x2d89d3\x2d270a\x2de873\x2dbc7b57e620f0.mount: Deactivated successfully. Jan 13 20:46:14.307661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273352976.mount: Deactivated successfully. Jan 13 20:46:14.460188 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:53870.service - OpenSSH per-connection server daemon (10.0.0.1:53870). 
Jan 13 20:46:14.504224 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 53870 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:14.506274 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:14.510898 systemd-logind[1485]: New session 10 of user core. Jan 13 20:46:14.517588 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:46:14.780230 sshd[4616]: Connection closed by 10.0.0.1 port 53870 Jan 13 20:46:14.780656 sshd-session[4614]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:14.786220 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:53870.service: Deactivated successfully. Jan 13 20:46:14.788865 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:46:14.789830 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:46:14.790941 systemd-logind[1485]: Removed session 10. Jan 13 20:46:14.958480 containerd[1494]: time="2025-01-13T20:46:14.958409841Z" level=error msg="Failed to destroy network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:14.959762 containerd[1494]: time="2025-01-13T20:46:14.959709692Z" level=error msg="encountered an error cleaning up failed sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:14.959762 containerd[1494]: time="2025-01-13T20:46:14.959766028Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:14.960251 kubelet[2681]: E0113 20:46:14.960217 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:14.960739 kubelet[2681]: E0113 20:46:14.960280 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:14.960739 kubelet[2681]: E0113 20:46:14.960304 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:14.960739 kubelet[2681]: E0113 20:46:14.960355 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" Jan 13 20:46:14.962902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971-shm.mount: Deactivated successfully. Jan 13 20:46:14.976485 containerd[1494]: time="2025-01-13T20:46:14.974621824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:14.991524 containerd[1494]: time="2025-01-13T20:46:14.989979733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:46:14.991524 containerd[1494]: time="2025-01-13T20:46:14.990119575Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:15.019543 containerd[1494]: time="2025-01-13T20:46:15.019479415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:15.021022 containerd[1494]: time="2025-01-13T20:46:15.020981145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.698815499s" Jan 13 20:46:15.021258 containerd[1494]: time="2025-01-13T20:46:15.021131887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:46:15.047499 containerd[1494]: time="2025-01-13T20:46:15.047341640Z" level=info msg="CreateContainer within sandbox \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:46:15.069937 containerd[1494]: time="2025-01-13T20:46:15.069521826Z" level=error msg="Failed to destroy network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.070058 containerd[1494]: time="2025-01-13T20:46:15.069989294Z" level=error msg="encountered an error cleaning up failed sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.070058 containerd[1494]: time="2025-01-13T20:46:15.070046472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.070311 kubelet[2681]: E0113 20:46:15.070283 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.070367 kubelet[2681]: E0113 20:46:15.070339 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:15.070367 kubelet[2681]: E0113 20:46:15.070361 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:15.070435 kubelet[2681]: E0113 20:46:15.070412 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:15.075968 containerd[1494]: time="2025-01-13T20:46:15.075909270Z" level=error msg="Failed to destroy network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.076746 containerd[1494]: time="2025-01-13T20:46:15.076719000Z" level=error msg="encountered an error cleaning up failed sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.076811 containerd[1494]: time="2025-01-13T20:46:15.076784202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.077077 kubelet[2681]: E0113 20:46:15.077048 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.077205 kubelet[2681]: E0113 20:46:15.077108 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:15.077205 kubelet[2681]: E0113 20:46:15.077130 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:15.077205 kubelet[2681]: E0113 20:46:15.077181 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" Jan 13 20:46:15.082283 containerd[1494]: time="2025-01-13T20:46:15.081446867Z" level=error msg="Failed to destroy network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.082283 containerd[1494]: time="2025-01-13T20:46:15.081832492Z" level=error msg="encountered an error cleaning up failed sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.082283 containerd[1494]: time="2025-01-13T20:46:15.081881413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.082447 kubelet[2681]: E0113 20:46:15.082086 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.082447 kubelet[2681]: E0113 
20:46:15.082127 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:15.082447 kubelet[2681]: E0113 20:46:15.082146 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:15.082545 kubelet[2681]: E0113 20:46:15.082193 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" Jan 13 20:46:15.084265 containerd[1494]: time="2025-01-13T20:46:15.084240312Z" level=error msg="Failed to destroy network for sandbox 
\"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.084648 containerd[1494]: time="2025-01-13T20:46:15.084625656Z" level=error msg="encountered an error cleaning up failed sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.084739 containerd[1494]: time="2025-01-13T20:46:15.084721545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.085064 kubelet[2681]: E0113 20:46:15.085022 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.085108 kubelet[2681]: E0113 20:46:15.085091 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:15.085140 kubelet[2681]: E0113 20:46:15.085120 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:15.085189 kubelet[2681]: E0113 20:46:15.085171 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" Jan 13 20:46:15.087156 containerd[1494]: time="2025-01-13T20:46:15.087123917Z" level=error msg="Failed to destroy network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:46:15.087526 containerd[1494]: time="2025-01-13T20:46:15.087436434Z" level=error msg="encountered an error cleaning up failed sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.087556 containerd[1494]: time="2025-01-13T20:46:15.087538916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.087710 kubelet[2681]: E0113 20:46:15.087687 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.087746 kubelet[2681]: E0113 20:46:15.087726 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:15.087746 kubelet[2681]: E0113 20:46:15.087743 2681 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:15.087810 kubelet[2681]: E0113 20:46:15.087783 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" Jan 13 20:46:15.111481 kubelet[2681]: I0113 20:46:15.111428 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5" Jan 13 20:46:15.112020 containerd[1494]: time="2025-01-13T20:46:15.111991840Z" level=info msg="StopPodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\"" Jan 13 20:46:15.112202 containerd[1494]: time="2025-01-13T20:46:15.112177308Z" level=info msg="Ensure that sandbox 8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5 in task-service has been cleanup successfully" Jan 13 20:46:15.112361 containerd[1494]: time="2025-01-13T20:46:15.112341596Z" level=info msg="TearDown network for sandbox 
\"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" successfully" Jan 13 20:46:15.112361 containerd[1494]: time="2025-01-13T20:46:15.112357215Z" level=info msg="StopPodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" returns successfully" Jan 13 20:46:15.112764 containerd[1494]: time="2025-01-13T20:46:15.112739815Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" Jan 13 20:46:15.112849 containerd[1494]: time="2025-01-13T20:46:15.112822590Z" level=info msg="TearDown network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" successfully" Jan 13 20:46:15.112849 containerd[1494]: time="2025-01-13T20:46:15.112832388Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" returns successfully" Jan 13 20:46:15.113160 containerd[1494]: time="2025-01-13T20:46:15.113129797Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:15.113215 containerd[1494]: time="2025-01-13T20:46:15.113199317Z" level=info msg="TearDown network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" successfully" Jan 13 20:46:15.113215 containerd[1494]: time="2025-01-13T20:46:15.113210638Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" returns successfully" Jan 13 20:46:15.113476 containerd[1494]: time="2025-01-13T20:46:15.113428667Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:15.113844 containerd[1494]: time="2025-01-13T20:46:15.113550517Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:15.113844 containerd[1494]: time="2025-01-13T20:46:15.113567959Z" level=info msg="StopPodSandbox for 
\"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:15.113844 containerd[1494]: time="2025-01-13T20:46:15.113778123Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:15.113934 kubelet[2681]: I0113 20:46:15.113609 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971" Jan 13 20:46:15.113974 containerd[1494]: time="2025-01-13T20:46:15.113861750Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:15.113974 containerd[1494]: time="2025-01-13T20:46:15.113872701Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:15.113974 containerd[1494]: time="2025-01-13T20:46:15.113923257Z" level=info msg="StopPodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\"" Jan 13 20:46:15.114120 containerd[1494]: time="2025-01-13T20:46:15.114094017Z" level=info msg="Ensure that sandbox 72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971 in task-service has been cleanup successfully" Jan 13 20:46:15.114321 containerd[1494]: time="2025-01-13T20:46:15.114291688Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:15.114381 containerd[1494]: time="2025-01-13T20:46:15.114371338Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:15.114422 containerd[1494]: time="2025-01-13T20:46:15.114380785Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:15.114618 containerd[1494]: time="2025-01-13T20:46:15.114478859Z" level=info 
msg="TearDown network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" successfully" Jan 13 20:46:15.114618 containerd[1494]: time="2025-01-13T20:46:15.114492906Z" level=info msg="StopPodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" returns successfully" Jan 13 20:46:15.114952 containerd[1494]: time="2025-01-13T20:46:15.114845037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:6,}" Jan 13 20:46:15.115321 containerd[1494]: time="2025-01-13T20:46:15.115153246Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\"" Jan 13 20:46:15.115321 containerd[1494]: time="2025-01-13T20:46:15.115266368Z" level=info msg="TearDown network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" successfully" Jan 13 20:46:15.115321 containerd[1494]: time="2025-01-13T20:46:15.115281807Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" returns successfully" Jan 13 20:46:15.115751 containerd[1494]: time="2025-01-13T20:46:15.115627135Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" Jan 13 20:46:15.115751 containerd[1494]: time="2025-01-13T20:46:15.115698920Z" level=info msg="TearDown network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" successfully" Jan 13 20:46:15.115751 containerd[1494]: time="2025-01-13T20:46:15.115708207Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" returns successfully" Jan 13 20:46:15.115935 kubelet[2681]: I0113 20:46:15.115888 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05" Jan 13 
20:46:15.116019 containerd[1494]: time="2025-01-13T20:46:15.115999525Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:15.116084 containerd[1494]: time="2025-01-13T20:46:15.116070197Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully" Jan 13 20:46:15.116124 containerd[1494]: time="2025-01-13T20:46:15.116082290Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully" Jan 13 20:46:15.116496 containerd[1494]: time="2025-01-13T20:46:15.116469146Z" level=info msg="StopPodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\"" Jan 13 20:46:15.116967 containerd[1494]: time="2025-01-13T20:46:15.116511036Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:15.116967 containerd[1494]: time="2025-01-13T20:46:15.116785921Z" level=info msg="Ensure that sandbox bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05 in task-service has been cleanup successfully" Jan 13 20:46:15.116967 containerd[1494]: time="2025-01-13T20:46:15.116904454Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:15.116967 containerd[1494]: time="2025-01-13T20:46:15.116937105Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:15.117191 containerd[1494]: time="2025-01-13T20:46:15.117165253Z" level=info msg="TearDown network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" successfully" Jan 13 20:46:15.117191 containerd[1494]: time="2025-01-13T20:46:15.117188176Z" level=info msg="StopPodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" returns 
successfully" Jan 13 20:46:15.117663 containerd[1494]: time="2025-01-13T20:46:15.117605390Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\"" Jan 13 20:46:15.117715 containerd[1494]: time="2025-01-13T20:46:15.117657328Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:15.117715 containerd[1494]: time="2025-01-13T20:46:15.117699948Z" level=info msg="TearDown network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" successfully" Jan 13 20:46:15.117715 containerd[1494]: time="2025-01-13T20:46:15.117712441Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" returns successfully" Jan 13 20:46:15.117896 containerd[1494]: time="2025-01-13T20:46:15.117746786Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:15.117896 containerd[1494]: time="2025-01-13T20:46:15.117763417Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:15.118005 kubelet[2681]: E0113 20:46:15.117981 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:15.118086 containerd[1494]: time="2025-01-13T20:46:15.118066326Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\"" Jan 13 20:46:15.118158 containerd[1494]: time="2025-01-13T20:46:15.118140395Z" level=info msg="TearDown network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" successfully" Jan 13 20:46:15.118158 containerd[1494]: time="2025-01-13T20:46:15.118154050Z" level=info msg="StopPodSandbox for 
\"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" returns successfully" Jan 13 20:46:15.118325 containerd[1494]: time="2025-01-13T20:46:15.118290356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:6,}" Jan 13 20:46:15.118523 containerd[1494]: time="2025-01-13T20:46:15.118498367Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" Jan 13 20:46:15.118666 containerd[1494]: time="2025-01-13T20:46:15.118648929Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully" Jan 13 20:46:15.118666 containerd[1494]: time="2025-01-13T20:46:15.118661042Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully" Jan 13 20:46:15.119067 containerd[1494]: time="2025-01-13T20:46:15.118982085Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:15.119126 containerd[1494]: time="2025-01-13T20:46:15.119074859Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:15.119126 containerd[1494]: time="2025-01-13T20:46:15.119091270Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:15.119425 containerd[1494]: time="2025-01-13T20:46:15.119399018Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:15.119543 containerd[1494]: time="2025-01-13T20:46:15.119521006Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:15.119543 containerd[1494]: time="2025-01-13T20:46:15.119537798Z" 
level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:15.119622 kubelet[2681]: I0113 20:46:15.119595 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152" Jan 13 20:46:15.120166 containerd[1494]: time="2025-01-13T20:46:15.120039801Z" level=info msg="StopPodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\"" Jan 13 20:46:15.120166 containerd[1494]: time="2025-01-13T20:46:15.120135761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:6,}" Jan 13 20:46:15.120249 containerd[1494]: time="2025-01-13T20:46:15.120220660Z" level=info msg="Ensure that sandbox e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152 in task-service has been cleanup successfully" Jan 13 20:46:15.120443 containerd[1494]: time="2025-01-13T20:46:15.120421668Z" level=info msg="TearDown network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" successfully" Jan 13 20:46:15.120443 containerd[1494]: time="2025-01-13T20:46:15.120441465Z" level=info msg="StopPodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" returns successfully" Jan 13 20:46:15.120855 containerd[1494]: time="2025-01-13T20:46:15.120826047Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" Jan 13 20:46:15.120950 containerd[1494]: time="2025-01-13T20:46:15.120913772Z" level=info msg="TearDown network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" successfully" Jan 13 20:46:15.120950 containerd[1494]: time="2025-01-13T20:46:15.120936534Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" returns 
successfully" Jan 13 20:46:15.121384 containerd[1494]: time="2025-01-13T20:46:15.121360410Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:15.121544 containerd[1494]: time="2025-01-13T20:46:15.121525851Z" level=info msg="TearDown network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" successfully" Jan 13 20:46:15.121544 containerd[1494]: time="2025-01-13T20:46:15.121540589Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" returns successfully" Jan 13 20:46:15.121946 containerd[1494]: time="2025-01-13T20:46:15.121849649Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:15.121946 containerd[1494]: time="2025-01-13T20:46:15.121934659Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:15.121946 containerd[1494]: time="2025-01-13T20:46:15.121944487Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:15.122219 containerd[1494]: time="2025-01-13T20:46:15.122190530Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:15.122304 containerd[1494]: time="2025-01-13T20:46:15.122274818Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:15.122304 containerd[1494]: time="2025-01-13T20:46:15.122291599Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:15.122555 kubelet[2681]: I0113 20:46:15.122515 2681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412" Jan 13 20:46:15.122621 containerd[1494]: time="2025-01-13T20:46:15.122519887Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:15.122621 containerd[1494]: time="2025-01-13T20:46:15.122594688Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:15.122621 containerd[1494]: time="2025-01-13T20:46:15.122603785Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:15.122896 containerd[1494]: time="2025-01-13T20:46:15.122859956Z" level=info msg="StopPodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\"" Jan 13 20:46:15.122999 kubelet[2681]: E0113 20:46:15.122879 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:15.123071 containerd[1494]: time="2025-01-13T20:46:15.123015278Z" level=info msg="Ensure that sandbox 922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412 in task-service has been cleanup successfully" Jan 13 20:46:15.123130 containerd[1494]: time="2025-01-13T20:46:15.123095528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:6,}" Jan 13 20:46:15.123241 containerd[1494]: time="2025-01-13T20:46:15.123160190Z" level=info msg="TearDown network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" successfully" Jan 13 20:46:15.123241 containerd[1494]: time="2025-01-13T20:46:15.123170870Z" level=info msg="StopPodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" returns successfully" Jan 13 20:46:15.123802 
containerd[1494]: time="2025-01-13T20:46:15.123586290Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\"" Jan 13 20:46:15.123802 containerd[1494]: time="2025-01-13T20:46:15.123686047Z" level=info msg="TearDown network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" successfully" Jan 13 20:46:15.123802 containerd[1494]: time="2025-01-13T20:46:15.123701536Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" returns successfully" Jan 13 20:46:15.123984 containerd[1494]: time="2025-01-13T20:46:15.123930135Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" Jan 13 20:46:15.124019 containerd[1494]: time="2025-01-13T20:46:15.124008953Z" level=info msg="TearDown network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" successfully" Jan 13 20:46:15.124050 containerd[1494]: time="2025-01-13T20:46:15.124018190Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" returns successfully" Jan 13 20:46:15.124388 containerd[1494]: time="2025-01-13T20:46:15.124355855Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:15.124476 containerd[1494]: time="2025-01-13T20:46:15.124443799Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully" Jan 13 20:46:15.124476 containerd[1494]: time="2025-01-13T20:46:15.124472294Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully" Jan 13 20:46:15.124767 containerd[1494]: time="2025-01-13T20:46:15.124676617Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:15.124767 containerd[1494]: 
time="2025-01-13T20:46:15.124758270Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:15.124767 containerd[1494]: time="2025-01-13T20:46:15.124767197Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:15.125272 containerd[1494]: time="2025-01-13T20:46:15.125105222Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:15.125272 containerd[1494]: time="2025-01-13T20:46:15.125191624Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:15.125272 containerd[1494]: time="2025-01-13T20:46:15.125200560Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:15.125432 kubelet[2681]: I0113 20:46:15.125348 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a" Jan 13 20:46:15.125810 containerd[1494]: time="2025-01-13T20:46:15.125780700Z" level=info msg="StopPodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\"" Jan 13 20:46:15.126006 containerd[1494]: time="2025-01-13T20:46:15.125976918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:6,}" Jan 13 20:46:15.126112 containerd[1494]: time="2025-01-13T20:46:15.125984582Z" level=info msg="Ensure that sandbox 9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a in task-service has been cleanup successfully" Jan 13 20:46:15.126249 containerd[1494]: time="2025-01-13T20:46:15.126231476Z" level=info msg="TearDown network for sandbox 
\"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" successfully" Jan 13 20:46:15.126278 containerd[1494]: time="2025-01-13T20:46:15.126246254Z" level=info msg="StopPodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" returns successfully" Jan 13 20:46:15.126523 containerd[1494]: time="2025-01-13T20:46:15.126500151Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" Jan 13 20:46:15.126604 containerd[1494]: time="2025-01-13T20:46:15.126570523Z" level=info msg="TearDown network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" successfully" Jan 13 20:46:15.126604 containerd[1494]: time="2025-01-13T20:46:15.126578408Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" returns successfully" Jan 13 20:46:15.126837 containerd[1494]: time="2025-01-13T20:46:15.126809823Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:15.126924 containerd[1494]: time="2025-01-13T20:46:15.126902677Z" level=info msg="TearDown network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" successfully" Jan 13 20:46:15.126924 containerd[1494]: time="2025-01-13T20:46:15.126921472Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" returns successfully" Jan 13 20:46:15.127323 containerd[1494]: time="2025-01-13T20:46:15.127179617Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:15.127323 containerd[1494]: time="2025-01-13T20:46:15.127263434Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:15.127323 containerd[1494]: time="2025-01-13T20:46:15.127272712Z" level=info msg="StopPodSandbox for 
\"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:15.127616 containerd[1494]: time="2025-01-13T20:46:15.127532590Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:15.127616 containerd[1494]: time="2025-01-13T20:46:15.127607981Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:15.127616 containerd[1494]: time="2025-01-13T20:46:15.127617189Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:15.127882 containerd[1494]: time="2025-01-13T20:46:15.127858763Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:15.127977 containerd[1494]: time="2025-01-13T20:46:15.127958219Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:15.128034 containerd[1494]: time="2025-01-13T20:46:15.127977675Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:15.128392 containerd[1494]: time="2025-01-13T20:46:15.128360865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:6,}" Jan 13 20:46:15.253979 containerd[1494]: time="2025-01-13T20:46:15.253918617Z" level=info msg="CreateContainer within sandbox \"6b0edbeb4264c7c768fa31ea1595763a4ddc1b3e910eb47f59890f934f66bd50\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e34a9c026bd4942163cf149e80f02ac338ca5558d388572eed05fc83effb47cb\"" Jan 13 20:46:15.254512 containerd[1494]: time="2025-01-13T20:46:15.254477185Z" level=info msg="StartContainer for 
\"e34a9c026bd4942163cf149e80f02ac338ca5558d388572eed05fc83effb47cb\"" Jan 13 20:46:15.392834 containerd[1494]: time="2025-01-13T20:46:15.392638813Z" level=error msg="Failed to destroy network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.393645 containerd[1494]: time="2025-01-13T20:46:15.393436071Z" level=error msg="encountered an error cleaning up failed sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.393645 containerd[1494]: time="2025-01-13T20:46:15.393548552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.394336 kubelet[2681]: E0113 20:46:15.394033 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.394336 kubelet[2681]: E0113 20:46:15.394115 2681 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:15.394336 kubelet[2681]: E0113 20:46:15.394137 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n9xm5" Jan 13 20:46:15.394535 kubelet[2681]: E0113 20:46:15.394211 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n9xm5_calico-system(39e13210-d183-473d-999b-c81aa9bc8ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n9xm5" podUID="39e13210-d183-473d-999b-c81aa9bc8ccf" Jan 13 20:46:15.398517 containerd[1494]: time="2025-01-13T20:46:15.398085651Z" level=error msg="Failed to destroy network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.398737 containerd[1494]: time="2025-01-13T20:46:15.398707498Z" level=error msg="encountered an error cleaning up failed sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.400011 containerd[1494]: time="2025-01-13T20:46:15.399978645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.400290 kubelet[2681]: E0113 20:46:15.400265 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.400576 kubelet[2681]: E0113 20:46:15.400544 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" 
Jan 13 20:46:15.400694 kubelet[2681]: E0113 20:46:15.400678 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-sj5ll" Jan 13 20:46:15.401026 kubelet[2681]: E0113 20:46:15.401009 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-sj5ll_kube-system(ea7e48ee-74c8-4c04-8866-2bd72cdc56d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-sj5ll" podUID="ea7e48ee-74c8-4c04-8866-2bd72cdc56d3" Jan 13 20:46:15.402813 containerd[1494]: time="2025-01-13T20:46:15.402752924Z" level=error msg="Failed to destroy network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.403285 containerd[1494]: time="2025-01-13T20:46:15.403261740Z" level=error msg="encountered an error cleaning up failed sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.403389 containerd[1494]: time="2025-01-13T20:46:15.403369993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.403709 kubelet[2681]: E0113 20:46:15.403686 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.403764 kubelet[2681]: E0113 20:46:15.403734 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:15.403764 kubelet[2681]: E0113 20:46:15.403758 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zjvgd" Jan 13 20:46:15.403850 kubelet[2681]: E0113 20:46:15.403827 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zjvgd_kube-system(82db675e-45a2-40cb-aaa5-0e3781350d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zjvgd" podUID="82db675e-45a2-40cb-aaa5-0e3781350d23" Jan 13 20:46:15.409321 containerd[1494]: time="2025-01-13T20:46:15.409256495Z" level=error msg="Failed to destroy network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.409521 containerd[1494]: time="2025-01-13T20:46:15.409349701Z" level=error msg="Failed to destroy network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.409802 containerd[1494]: time="2025-01-13T20:46:15.409757526Z" level=error msg="encountered an error cleaning up failed sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.409849 containerd[1494]: time="2025-01-13T20:46:15.409792792Z" level=error msg="encountered an error cleaning up failed sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.409849 containerd[1494]: time="2025-01-13T20:46:15.409821035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.409849 containerd[1494]: time="2025-01-13T20:46:15.409836645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.410075 kubelet[2681]: E0113 20:46:15.410042 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.410127 kubelet[2681]: E0113 20:46:15.410087 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:15.410127 kubelet[2681]: E0113 20:46:15.410107 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" Jan 13 20:46:15.410189 kubelet[2681]: E0113 20:46:15.410159 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-z54wj_calico-apiserver(164635ec-fca2-4958-bf9f-f8a81545fa24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" 
podUID="164635ec-fca2-4958-bf9f-f8a81545fa24" Jan 13 20:46:15.410313 kubelet[2681]: E0113 20:46:15.410276 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.410547 kubelet[2681]: E0113 20:46:15.410323 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:15.410547 kubelet[2681]: E0113 20:46:15.410346 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" Jan 13 20:46:15.410547 kubelet[2681]: E0113 20:46:15.410397 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995746cb4-vtxf5_calico-system(20a94580-01ed-434d-8ae3-3bc7fd6089f3)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podUID="20a94580-01ed-434d-8ae3-3bc7fd6089f3" Jan 13 20:46:15.575523 containerd[1494]: time="2025-01-13T20:46:15.575438024Z" level=error msg="Failed to destroy network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.576055 containerd[1494]: time="2025-01-13T20:46:15.575994148Z" level=error msg="encountered an error cleaning up failed sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.576055 containerd[1494]: time="2025-01-13T20:46:15.576054922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.576371 kubelet[2681]: E0113 20:46:15.576329 2681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.576434 kubelet[2681]: E0113 20:46:15.576406 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:15.576434 kubelet[2681]: E0113 20:46:15.576428 2681 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" Jan 13 20:46:15.576549 kubelet[2681]: E0113 20:46:15.576529 2681 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd56cf7c-gf2sc_calico-apiserver(005c7b7f-f680-4342-abb9-808a0c23c33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podUID="005c7b7f-f680-4342-abb9-808a0c23c33a" Jan 13 20:46:15.603607 systemd[1]: Started cri-containerd-e34a9c026bd4942163cf149e80f02ac338ca5558d388572eed05fc83effb47cb.scope - libcontainer container e34a9c026bd4942163cf149e80f02ac338ca5558d388572eed05fc83effb47cb. Jan 13 20:46:15.687401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5-shm.mount: Deactivated successfully. Jan 13 20:46:15.689356 systemd[1]: run-netns-cni\x2dfddeec66\x2de8b4\x2d921b\x2d8b11\x2d196fc01c0290.mount: Deactivated successfully. Jan 13 20:46:15.689444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05-shm.mount: Deactivated successfully. Jan 13 20:46:15.689916 systemd[1]: run-netns-cni\x2d3231ca8f\x2d820a\x2de23a\x2d87e0\x2d41911fc50136.mount: Deactivated successfully. Jan 13 20:46:15.690010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152-shm.mount: Deactivated successfully. Jan 13 20:46:15.690091 systemd[1]: run-netns-cni\x2d2195785e\x2d91a2\x2de72d\x2d1ea8\x2d7625ddea34bc.mount: Deactivated successfully. Jan 13 20:46:15.690166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412-shm.mount: Deactivated successfully. Jan 13 20:46:15.690249 systemd[1]: run-netns-cni\x2d6a71e3c2\x2dace8\x2de0dc\x2dbe9a\x2d6bcead29a660.mount: Deactivated successfully. Jan 13 20:46:15.700888 containerd[1494]: time="2025-01-13T20:46:15.700749624Z" level=info msg="StartContainer for \"e34a9c026bd4942163cf149e80f02ac338ca5558d388572eed05fc83effb47cb\" returns successfully" Jan 13 20:46:15.720884 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:46:15.721095 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Jan 13 20:46:16.129857 kubelet[2681]: I0113 20:46:16.129671 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4" Jan 13 20:46:16.130358 containerd[1494]: time="2025-01-13T20:46:16.130332233Z" level=info msg="StopPodSandbox for \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\"" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.130710443Z" level=info msg="Ensure that sandbox 4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4 in task-service has been cleanup successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.130937901Z" level=info msg="TearDown network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\" successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.130953129Z" level=info msg="StopPodSandbox for \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\" returns successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131163133Z" level=info msg="StopPodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\"" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131233635Z" level=info msg="TearDown network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131242503Z" level=info msg="StopPodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" returns successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131561461Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\"" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131657803Z" level=info msg="TearDown network for sandbox 
\"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131670346Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" returns successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.131944470Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\"" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.132036263Z" level=info msg="TearDown network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" successfully" Jan 13 20:46:16.132125 containerd[1494]: time="2025-01-13T20:46:16.132050169Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" returns successfully" Jan 13 20:46:16.133384 containerd[1494]: time="2025-01-13T20:46:16.133102885Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" Jan 13 20:46:16.133647 containerd[1494]: time="2025-01-13T20:46:16.133583587Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully" Jan 13 20:46:16.133647 containerd[1494]: time="2025-01-13T20:46:16.133604276Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully" Jan 13 20:46:16.133979 containerd[1494]: time="2025-01-13T20:46:16.133884332Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:16.134021 containerd[1494]: time="2025-01-13T20:46:16.133971195Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:16.134021 containerd[1494]: time="2025-01-13T20:46:16.134014186Z" level=info msg="StopPodSandbox for 
\"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:16.134578 containerd[1494]: time="2025-01-13T20:46:16.134550954Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:16.134652 containerd[1494]: time="2025-01-13T20:46:16.134631325Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:16.134652 containerd[1494]: time="2025-01-13T20:46:16.134646073Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:16.134662 systemd[1]: run-netns-cni\x2de9222895\x2d62f9\x2d65a8\x2da1c6\x2d33cb3b22299d.mount: Deactivated successfully. Jan 13 20:46:16.135352 containerd[1494]: time="2025-01-13T20:46:16.135005087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:7,}" Jan 13 20:46:16.135444 kubelet[2681]: I0113 20:46:16.135035 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52" Jan 13 20:46:16.135647 containerd[1494]: time="2025-01-13T20:46:16.135608699Z" level=info msg="StopPodSandbox for \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\"" Jan 13 20:46:16.136132 containerd[1494]: time="2025-01-13T20:46:16.135931385Z" level=info msg="Ensure that sandbox b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52 in task-service has been cleanup successfully" Jan 13 20:46:16.136263 containerd[1494]: time="2025-01-13T20:46:16.136242289Z" level=info msg="TearDown network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\" successfully" Jan 13 20:46:16.136380 containerd[1494]: time="2025-01-13T20:46:16.136324043Z" level=info 
msg="StopPodSandbox for \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\" returns successfully" Jan 13 20:46:16.138013 containerd[1494]: time="2025-01-13T20:46:16.137981665Z" level=info msg="StopPodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\"" Jan 13 20:46:16.138100 containerd[1494]: time="2025-01-13T20:46:16.138081231Z" level=info msg="TearDown network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" successfully" Jan 13 20:46:16.138100 containerd[1494]: time="2025-01-13T20:46:16.138096610Z" level=info msg="StopPodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" returns successfully" Jan 13 20:46:16.138754 containerd[1494]: time="2025-01-13T20:46:16.138592412Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" Jan 13 20:46:16.138754 containerd[1494]: time="2025-01-13T20:46:16.138677982Z" level=info msg="TearDown network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" successfully" Jan 13 20:46:16.138754 containerd[1494]: time="2025-01-13T20:46:16.138687570Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" returns successfully" Jan 13 20:46:16.139811 containerd[1494]: time="2025-01-13T20:46:16.139753622Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:16.140171 containerd[1494]: time="2025-01-13T20:46:16.140115120Z" level=info msg="TearDown network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" successfully" Jan 13 20:46:16.140171 containerd[1494]: time="2025-01-13T20:46:16.140168661Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" returns successfully" Jan 13 20:46:16.140195 systemd[1]: 
run-netns-cni\x2d79149cd6\x2d1fc5\x2dfd3d\x2dd502\x2dc6b4494c3a18.mount: Deactivated successfully. Jan 13 20:46:16.140747 kubelet[2681]: I0113 20:46:16.140723 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6" Jan 13 20:46:16.141301 containerd[1494]: time="2025-01-13T20:46:16.141171414Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:16.141301 containerd[1494]: time="2025-01-13T20:46:16.141174840Z" level=info msg="StopPodSandbox for \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\"" Jan 13 20:46:16.141392 containerd[1494]: time="2025-01-13T20:46:16.141270519Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:16.141392 containerd[1494]: time="2025-01-13T20:46:16.141380436Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:16.141477 containerd[1494]: time="2025-01-13T20:46:16.141427334Z" level=info msg="Ensure that sandbox 2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6 in task-service has been cleanup successfully" Jan 13 20:46:16.141757 containerd[1494]: time="2025-01-13T20:46:16.141729722Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:16.141850 containerd[1494]: time="2025-01-13T20:46:16.141734191Z" level=info msg="TearDown network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\" successfully" Jan 13 20:46:16.141850 containerd[1494]: time="2025-01-13T20:46:16.141847503Z" level=info msg="StopPodSandbox for \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\" returns successfully" Jan 13 20:46:16.141924 containerd[1494]: time="2025-01-13T20:46:16.141905101Z" 
level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:16.141952 containerd[1494]: time="2025-01-13T20:46:16.141923005Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:16.142173 containerd[1494]: time="2025-01-13T20:46:16.142139722Z" level=info msg="StopPodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\"" Jan 13 20:46:16.142541 containerd[1494]: time="2025-01-13T20:46:16.142235902Z" level=info msg="TearDown network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" successfully" Jan 13 20:46:16.142541 containerd[1494]: time="2025-01-13T20:46:16.142255149Z" level=info msg="StopPodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" returns successfully" Jan 13 20:46:16.142541 containerd[1494]: time="2025-01-13T20:46:16.142385543Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:16.142636 containerd[1494]: time="2025-01-13T20:46:16.142615475Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" Jan 13 20:46:16.142740 containerd[1494]: time="2025-01-13T20:46:16.142710133Z" level=info msg="TearDown network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" successfully" Jan 13 20:46:16.142795 containerd[1494]: time="2025-01-13T20:46:16.142733727Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" returns successfully" Jan 13 20:46:16.143083 containerd[1494]: time="2025-01-13T20:46:16.143059208Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:16.143168 containerd[1494]: time="2025-01-13T20:46:16.143143246Z" 
level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:16.143980 containerd[1494]: time="2025-01-13T20:46:16.143956683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:7,}" Jan 13 20:46:16.144217 containerd[1494]: time="2025-01-13T20:46:16.143981469Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:16.144564 containerd[1494]: time="2025-01-13T20:46:16.144435211Z" level=info msg="TearDown network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" successfully" Jan 13 20:46:16.144609 containerd[1494]: time="2025-01-13T20:46:16.144560757Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" returns successfully" Jan 13 20:46:16.144999 containerd[1494]: time="2025-01-13T20:46:16.144972861Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:16.145084 containerd[1494]: time="2025-01-13T20:46:16.145070514Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:16.145119 containerd[1494]: time="2025-01-13T20:46:16.145082837Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:16.145565 containerd[1494]: time="2025-01-13T20:46:16.145398691Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:16.145565 containerd[1494]: time="2025-01-13T20:46:16.145498798Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:16.145565 containerd[1494]: 
time="2025-01-13T20:46:16.145509017Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:16.145930 containerd[1494]: time="2025-01-13T20:46:16.145881858Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:16.146007 containerd[1494]: time="2025-01-13T20:46:16.145978719Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:16.146007 containerd[1494]: time="2025-01-13T20:46:16.145997124Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:16.146720 kubelet[2681]: I0113 20:46:16.146612 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7" Jan 13 20:46:16.146804 containerd[1494]: time="2025-01-13T20:46:16.146588004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:7,}" Jan 13 20:46:16.147223 containerd[1494]: time="2025-01-13T20:46:16.147200433Z" level=info msg="StopPodSandbox for \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\"" Jan 13 20:46:16.147387 containerd[1494]: time="2025-01-13T20:46:16.147365944Z" level=info msg="Ensure that sandbox f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7 in task-service has been cleanup successfully" Jan 13 20:46:16.147604 systemd[1]: run-netns-cni\x2d32249a1b\x2dcd72\x2d8071\x2d6bcc\x2d4866394a14e9.mount: Deactivated successfully. 
Jan 13 20:46:16.147761 containerd[1494]: time="2025-01-13T20:46:16.147738853Z" level=info msg="TearDown network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\" successfully" Jan 13 20:46:16.147761 containerd[1494]: time="2025-01-13T20:46:16.147761215Z" level=info msg="StopPodSandbox for \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\" returns successfully" Jan 13 20:46:16.148567 containerd[1494]: time="2025-01-13T20:46:16.148321718Z" level=info msg="StopPodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\"" Jan 13 20:46:16.148567 containerd[1494]: time="2025-01-13T20:46:16.148420984Z" level=info msg="TearDown network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" successfully" Jan 13 20:46:16.148567 containerd[1494]: time="2025-01-13T20:46:16.148433999Z" level=info msg="StopPodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" returns successfully" Jan 13 20:46:16.149060 containerd[1494]: time="2025-01-13T20:46:16.148924920Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" Jan 13 20:46:16.149060 containerd[1494]: time="2025-01-13T20:46:16.149015922Z" level=info msg="TearDown network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" successfully" Jan 13 20:46:16.149060 containerd[1494]: time="2025-01-13T20:46:16.149026111Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" returns successfully" Jan 13 20:46:16.149403 containerd[1494]: time="2025-01-13T20:46:16.149380125Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:16.149704 kubelet[2681]: I0113 20:46:16.149677 2681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d" Jan 13 20:46:16.149981 containerd[1494]: time="2025-01-13T20:46:16.149829750Z" level=info msg="TearDown network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" successfully" Jan 13 20:46:16.149981 containerd[1494]: time="2025-01-13T20:46:16.149849327Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" returns successfully" Jan 13 20:46:16.150205 containerd[1494]: time="2025-01-13T20:46:16.150182932Z" level=info msg="StopPodSandbox for \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\"" Jan 13 20:46:16.150560 containerd[1494]: time="2025-01-13T20:46:16.150204183Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:16.150560 containerd[1494]: time="2025-01-13T20:46:16.150430768Z" level=info msg="Ensure that sandbox acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d in task-service has been cleanup successfully" Jan 13 20:46:16.150560 containerd[1494]: time="2025-01-13T20:46:16.150443932Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:16.150560 containerd[1494]: time="2025-01-13T20:46:16.150481232Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:16.150712 containerd[1494]: time="2025-01-13T20:46:16.150689734Z" level=info msg="TearDown network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\" successfully" Jan 13 20:46:16.150712 containerd[1494]: time="2025-01-13T20:46:16.150707307Z" level=info msg="StopPodSandbox for \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\" returns successfully" Jan 13 20:46:16.151078 containerd[1494]: time="2025-01-13T20:46:16.150942148Z" level=info 
msg="StopPodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\"" Jan 13 20:46:16.151078 containerd[1494]: time="2025-01-13T20:46:16.151021397Z" level=info msg="TearDown network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" successfully" Jan 13 20:46:16.151078 containerd[1494]: time="2025-01-13T20:46:16.151033019Z" level=info msg="StopPodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" returns successfully" Jan 13 20:46:16.151078 containerd[1494]: time="2025-01-13T20:46:16.151021768Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:16.151215 containerd[1494]: time="2025-01-13T20:46:16.151174114Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:16.151215 containerd[1494]: time="2025-01-13T20:46:16.151210442Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:16.152085 containerd[1494]: time="2025-01-13T20:46:16.151589213Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:16.152085 containerd[1494]: time="2025-01-13T20:46:16.151687848Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:16.152085 containerd[1494]: time="2025-01-13T20:46:16.151701473Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:16.152255 kubelet[2681]: E0113 20:46:16.151910 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.152435 systemd[1]: 
run-netns-cni\x2d6967f047\x2d9f2e\x2da3cb\x2d57ed\x2d7233eacf9f3c.mount: Deactivated successfully. Jan 13 20:46:16.152638 containerd[1494]: time="2025-01-13T20:46:16.152618566Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\"" Jan 13 20:46:16.152813 containerd[1494]: time="2025-01-13T20:46:16.152754771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:7,}" Jan 13 20:46:16.152813 containerd[1494]: time="2025-01-13T20:46:16.152778957Z" level=info msg="TearDown network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" successfully" Jan 13 20:46:16.152813 containerd[1494]: time="2025-01-13T20:46:16.152793724Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" returns successfully" Jan 13 20:46:16.153851 containerd[1494]: time="2025-01-13T20:46:16.153713401Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" Jan 13 20:46:16.153851 containerd[1494]: time="2025-01-13T20:46:16.153798640Z" level=info msg="TearDown network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" successfully" Jan 13 20:46:16.153851 containerd[1494]: time="2025-01-13T20:46:16.153808088Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" returns successfully" Jan 13 20:46:16.154287 containerd[1494]: time="2025-01-13T20:46:16.154144280Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:16.154287 containerd[1494]: time="2025-01-13T20:46:16.154227826Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully" Jan 13 20:46:16.154287 containerd[1494]: time="2025-01-13T20:46:16.154237064Z" 
level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully" Jan 13 20:46:16.154951 kubelet[2681]: E0113 20:46:16.154922 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.155157 containerd[1494]: time="2025-01-13T20:46:16.155113189Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:16.155300 containerd[1494]: time="2025-01-13T20:46:16.155280924Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:16.155351 containerd[1494]: time="2025-01-13T20:46:16.155297595Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:16.155599 containerd[1494]: time="2025-01-13T20:46:16.155566681Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:16.155694 containerd[1494]: time="2025-01-13T20:46:16.155667359Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:16.155694 containerd[1494]: time="2025-01-13T20:46:16.155685233Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:16.156063 containerd[1494]: time="2025-01-13T20:46:16.156038035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:7,}" Jan 13 20:46:16.157188 kubelet[2681]: I0113 20:46:16.157168 2681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5" Jan 13 20:46:16.158005 containerd[1494]: time="2025-01-13T20:46:16.157619024Z" level=info msg="StopPodSandbox for \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\"" Jan 13 20:46:16.158005 containerd[1494]: time="2025-01-13T20:46:16.157838936Z" level=info msg="Ensure that sandbox 8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5 in task-service has been cleanup successfully" Jan 13 20:46:16.158143 containerd[1494]: time="2025-01-13T20:46:16.158099626Z" level=info msg="TearDown network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\" successfully" Jan 13 20:46:16.158182 containerd[1494]: time="2025-01-13T20:46:16.158140873Z" level=info msg="StopPodSandbox for \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\" returns successfully" Jan 13 20:46:16.158587 containerd[1494]: time="2025-01-13T20:46:16.158554590Z" level=info msg="StopPodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\"" Jan 13 20:46:16.158665 containerd[1494]: time="2025-01-13T20:46:16.158646242Z" level=info msg="TearDown network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" successfully" Jan 13 20:46:16.158665 containerd[1494]: time="2025-01-13T20:46:16.158661771Z" level=info msg="StopPodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" returns successfully" Jan 13 20:46:16.158925 containerd[1494]: time="2025-01-13T20:46:16.158888978Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\"" Jan 13 20:46:16.159000 containerd[1494]: time="2025-01-13T20:46:16.158980800Z" level=info msg="TearDown network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" successfully" Jan 13 20:46:16.159000 containerd[1494]: time="2025-01-13T20:46:16.158997121Z" level=info msg="StopPodSandbox 
for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" returns successfully" Jan 13 20:46:16.159504 containerd[1494]: time="2025-01-13T20:46:16.159479977Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" Jan 13 20:46:16.159755 containerd[1494]: time="2025-01-13T20:46:16.159584524Z" level=info msg="TearDown network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" successfully" Jan 13 20:46:16.159755 containerd[1494]: time="2025-01-13T20:46:16.159632825Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" returns successfully" Jan 13 20:46:16.159950 containerd[1494]: time="2025-01-13T20:46:16.159927939Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:16.160103 containerd[1494]: time="2025-01-13T20:46:16.160079443Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully" Jan 13 20:46:16.160103 containerd[1494]: time="2025-01-13T20:46:16.160099070Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully" Jan 13 20:46:16.160494 containerd[1494]: time="2025-01-13T20:46:16.160469636Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:16.160724 containerd[1494]: time="2025-01-13T20:46:16.160674009Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:16.160724 containerd[1494]: time="2025-01-13T20:46:16.160695400Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:16.161180 containerd[1494]: time="2025-01-13T20:46:16.161023806Z" level=info msg="StopPodSandbox for 
\"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:16.161180 containerd[1494]: time="2025-01-13T20:46:16.161111030Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:16.161180 containerd[1494]: time="2025-01-13T20:46:16.161122341Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:16.161482 kubelet[2681]: E0113 20:46:16.161445 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.161903 containerd[1494]: time="2025-01-13T20:46:16.161857200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:7,}" Jan 13 20:46:16.304394 kubelet[2681]: I0113 20:46:16.303934 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-8cktq" podStartSLOduration=2.810590506 podStartE2EDuration="24.30389923s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:45:53.52819955 +0000 UTC m=+20.387286210" lastFinishedPulling="2025-01-13 20:46:15.021508274 +0000 UTC m=+41.880594934" observedRunningTime="2025-01-13 20:46:16.303032473 +0000 UTC m=+43.162119133" watchObservedRunningTime="2025-01-13 20:46:16.30389923 +0000 UTC m=+43.162985890" Jan 13 20:46:16.689871 systemd[1]: run-netns-cni\x2d3e629624\x2d851f\x2df1fe\x2d4da4\x2d04dbd9cc848e.mount: Deactivated successfully. Jan 13 20:46:16.690440 systemd[1]: run-netns-cni\x2d31d8c356\x2d8c03\x2d2cf4\x2d5157\x2d89abde71dc75.mount: Deactivated successfully. 
Jan 13 20:46:16.783370 systemd-networkd[1414]: calia014e95b53d: Link UP Jan 13 20:46:16.783628 systemd-networkd[1414]: calia014e95b53d: Gained carrier Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.394 [INFO][5180] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5180] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--sj5ll-eth0 coredns-76f75df574- kube-system ea7e48ee-74c8-4c04-8866-2bd72cdc56d3 805 0 2025-01-13 20:45:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-sj5ll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia014e95b53d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.646 [INFO][5180] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.731 [INFO][5280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" HandleID="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Workload="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5280] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" HandleID="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Workload="localhost-k8s-coredns--76f75df574--sj5ll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4bb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-sj5ll", "timestamp":"2025-01-13 20:46:16.73127638 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5280] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.747 [INFO][5280] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.750 [INFO][5280] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.753 [INFO][5280] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.754 [INFO][5280] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.756 [INFO][5280] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.756 [INFO][5280] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.760 [INFO][5280] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.763 [INFO][5280] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.768 [INFO][5280] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.768 [INFO][5280] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" host="localhost" Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.769 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:46:16.796056 containerd[1494]: 2025-01-13 20:46:16.769 [INFO][5280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" HandleID="k8s-pod-network.9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Workload="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.797179 containerd[1494]: 2025-01-13 20:46:16.772 [INFO][5180] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--sj5ll-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ea7e48ee-74c8-4c04-8866-2bd72cdc56d3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-sj5ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia014e95b53d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:16.797179 containerd[1494]: 2025-01-13 20:46:16.773 [INFO][5180] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.797179 containerd[1494]: 2025-01-13 20:46:16.773 [INFO][5180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia014e95b53d ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.797179 containerd[1494]: 2025-01-13 20:46:16.783 [INFO][5180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.797179 containerd[1494]: 2025-01-13 20:46:16.783 [INFO][5180] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--sj5ll-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"ea7e48ee-74c8-4c04-8866-2bd72cdc56d3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d", Pod:"coredns-76f75df574-sj5ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia014e95b53d", MAC:"e6:7b:7d:8a:08:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:16.797179 containerd[1494]: 2025-01-13 20:46:16.792 [INFO][5180] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d" Namespace="kube-system" 
Pod="coredns-76f75df574-sj5ll" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--sj5ll-eth0" Jan 13 20:46:16.809277 systemd-networkd[1414]: cali2d26df1b70d: Link UP Jan 13 20:46:16.809710 systemd-networkd[1414]: cali2d26df1b70d: Gained carrier Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.423 [INFO][5207] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5207] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0 calico-kube-controllers-7995746cb4- calico-system 20a94580-01ed-434d-8ae3-3bc7fd6089f3 799 0 2025-01-13 20:45:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7995746cb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7995746cb4-vtxf5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2d26df1b70d [] []}} ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.646 [INFO][5207] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.735 [INFO][5278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" HandleID="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Workload="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.744 [INFO][5278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" HandleID="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Workload="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042a2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7995746cb4-vtxf5", "timestamp":"2025-01-13 20:46:16.735000372 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.744 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.768 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.768 [INFO][5278] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.770 [INFO][5278] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.773 [INFO][5278] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.776 [INFO][5278] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.778 [INFO][5278] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.781 [INFO][5278] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.781 [INFO][5278] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.785 [INFO][5278] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.789 [INFO][5278] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.797 [INFO][5278] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.797 [INFO][5278] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" host="localhost" Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.797 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:16.824284 containerd[1494]: 2025-01-13 20:46:16.797 [INFO][5278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" HandleID="k8s-pod-network.f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Workload="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.824886 containerd[1494]: 2025-01-13 20:46:16.799 [INFO][5207] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0", GenerateName:"calico-kube-controllers-7995746cb4-", Namespace:"calico-system", SelfLink:"", UID:"20a94580-01ed-434d-8ae3-3bc7fd6089f3", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7995746cb4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7995746cb4-vtxf5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2d26df1b70d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:16.824886 containerd[1494]: 2025-01-13 20:46:16.800 [INFO][5207] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.824886 containerd[1494]: 2025-01-13 20:46:16.800 [INFO][5207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d26df1b70d ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.824886 containerd[1494]: 2025-01-13 20:46:16.807 [INFO][5207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.824886 containerd[1494]: 2025-01-13 20:46:16.807 [INFO][5207] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0", GenerateName:"calico-kube-controllers-7995746cb4-", Namespace:"calico-system", SelfLink:"", UID:"20a94580-01ed-434d-8ae3-3bc7fd6089f3", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7995746cb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a", Pod:"calico-kube-controllers-7995746cb4-vtxf5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2d26df1b70d", MAC:"16:3b:55:93:4a:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:16.824886 containerd[1494]: 2025-01-13 20:46:16.818 [INFO][5207] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a" Namespace="calico-system" Pod="calico-kube-controllers-7995746cb4-vtxf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7995746cb4--vtxf5-eth0" Jan 13 20:46:16.893232 containerd[1494]: time="2025-01-13T20:46:16.893119537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:16.893232 containerd[1494]: time="2025-01-13T20:46:16.893185460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:16.893232 containerd[1494]: time="2025-01-13T20:46:16.893199126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:16.893434 containerd[1494]: time="2025-01-13T20:46:16.893291649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:16.917600 systemd[1]: Started cri-containerd-9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d.scope - libcontainer container 9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d. Jan 13 20:46:16.929859 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:16.944618 systemd-networkd[1414]: califf8956b48a0: Link UP Jan 13 20:46:16.947415 containerd[1494]: time="2025-01-13T20:46:16.947041794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:16.947415 containerd[1494]: time="2025-01-13T20:46:16.947161649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:16.947415 containerd[1494]: time="2025-01-13T20:46:16.947200723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:16.948161 systemd-networkd[1414]: califf8956b48a0: Gained carrier Jan 13 20:46:16.949759 containerd[1494]: time="2025-01-13T20:46:16.949686920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:16.976702 systemd[1]: Started cri-containerd-f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a.scope - libcontainer container f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a. Jan 13 20:46:16.977665 containerd[1494]: time="2025-01-13T20:46:16.977626365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sj5ll,Uid:ea7e48ee-74c8-4c04-8866-2bd72cdc56d3,Namespace:kube-system,Attempt:7,} returns sandbox id \"9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d\"" Jan 13 20:46:16.978285 kubelet[2681]: E0113 20:46:16.978251 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.980499 containerd[1494]: time="2025-01-13T20:46:16.980421363Z" level=info msg="CreateContainer within sandbox \"9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:46:16.993106 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.419 [INFO][5204] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5204] cni-plugin/plugin.go 325: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0 calico-apiserver-7fcd56cf7c- calico-apiserver 164635ec-fca2-4958-bf9f-f8a81545fa24 804 0 2025-01-13 20:45:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcd56cf7c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fcd56cf7c-z54wj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf8956b48a0 [] []}} ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5204] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.734 [INFO][5282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" HandleID="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Workload="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" HandleID="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Workload="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a1380), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fcd56cf7c-z54wj", "timestamp":"2025-01-13 20:46:16.734579051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.797 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.797 [INFO][5282] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.799 [INFO][5282] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.805 [INFO][5282] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.811 [INFO][5282] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.815 [INFO][5282] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.817 [INFO][5282] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.817 [INFO][5282] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.818 [INFO][5282] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.824 [INFO][5282] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.934 [INFO][5282] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.934 [INFO][5282] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" host="localhost" Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.934 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:46:17.005937 containerd[1494]: 2025-01-13 20:46:16.934 [INFO][5282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" HandleID="k8s-pod-network.244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Workload="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.006736 containerd[1494]: 2025-01-13 20:46:16.938 [INFO][5204] cni-plugin/k8s.go 386: Populated endpoint ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0", GenerateName:"calico-apiserver-7fcd56cf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"164635ec-fca2-4958-bf9f-f8a81545fa24", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd56cf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fcd56cf7c-z54wj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8956b48a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.006736 containerd[1494]: 2025-01-13 20:46:16.938 [INFO][5204] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.006736 containerd[1494]: 2025-01-13 20:46:16.938 [INFO][5204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf8956b48a0 ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.006736 containerd[1494]: 2025-01-13 20:46:16.949 [INFO][5204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.006736 containerd[1494]: 2025-01-13 20:46:16.950 [INFO][5204] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0", GenerateName:"calico-apiserver-7fcd56cf7c-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"164635ec-fca2-4958-bf9f-f8a81545fa24", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd56cf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c", Pod:"calico-apiserver-7fcd56cf7c-z54wj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8956b48a0", MAC:"da:f8:83:2a:0e:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.006736 containerd[1494]: 2025-01-13 20:46:17.002 [INFO][5204] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-z54wj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--z54wj-eth0" Jan 13 20:46:17.021587 containerd[1494]: time="2025-01-13T20:46:17.021504463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995746cb4-vtxf5,Uid:20a94580-01ed-434d-8ae3-3bc7fd6089f3,Namespace:calico-system,Attempt:7,} returns sandbox id \"f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a\"" Jan 13 20:46:17.024125 
containerd[1494]: time="2025-01-13T20:46:17.024078485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:46:17.039590 containerd[1494]: time="2025-01-13T20:46:17.039094004Z" level=info msg="CreateContainer within sandbox \"9e6d230f3649d2606ef2889e7b7fc1714078f14459eb2caae1d0c21ee4c7fe4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b39effc04969ec25fe4e0e755a39e240e6c609b00f9c233e69f275ad11be94b\"" Jan 13 20:46:17.039427 systemd-networkd[1414]: calia6b40da7967: Link UP Jan 13 20:46:17.042225 systemd-networkd[1414]: calia6b40da7967: Gained carrier Jan 13 20:46:17.043962 containerd[1494]: time="2025-01-13T20:46:17.043222845Z" level=info msg="StartContainer for \"4b39effc04969ec25fe4e0e755a39e240e6c609b00f9c233e69f275ad11be94b\"" Jan 13 20:46:17.047778 containerd[1494]: time="2025-01-13T20:46:17.047349101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:17.047778 containerd[1494]: time="2025-01-13T20:46:17.047447856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:17.047778 containerd[1494]: time="2025-01-13T20:46:17.047525432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.047778 containerd[1494]: time="2025-01-13T20:46:17.047634486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.417 [INFO][5191] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5191] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n9xm5-eth0 csi-node-driver- calico-system 39e13210-d183-473d-999b-c81aa9bc8ccf 658 0 2025-01-13 20:45:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n9xm5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia6b40da7967 [] []}} ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.646 [INFO][5191] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.730 [INFO][5281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" HandleID="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Workload="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5281] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" HandleID="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Workload="localhost-k8s-csi--node--driver--n9xm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000517c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n9xm5", "timestamp":"2025-01-13 20:46:16.730387902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.745 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.934 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.934 [INFO][5281] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.938 [INFO][5281] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.946 [INFO][5281] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.956 [INFO][5281] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:16.959 [INFO][5281] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.003 [INFO][5281] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.004 [INFO][5281] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.020 [INFO][5281] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.025 [INFO][5281] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.033 [INFO][5281] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.033 [INFO][5281] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" host="localhost" Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.033 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:46:17.058546 containerd[1494]: 2025-01-13 20:46:17.033 [INFO][5281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" HandleID="k8s-pod-network.2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Workload="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.059335 containerd[1494]: 2025-01-13 20:46:17.036 [INFO][5191] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n9xm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39e13210-d183-473d-999b-c81aa9bc8ccf", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n9xm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6b40da7967", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.059335 containerd[1494]: 2025-01-13 20:46:17.037 [INFO][5191] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.059335 containerd[1494]: 2025-01-13 20:46:17.037 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6b40da7967 ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.059335 containerd[1494]: 2025-01-13 20:46:17.039 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.059335 containerd[1494]: 2025-01-13 20:46:17.044 [INFO][5191] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n9xm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39e13210-d183-473d-999b-c81aa9bc8ccf", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f", Pod:"csi-node-driver-n9xm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia6b40da7967", MAC:"ae:e5:68:22:25:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.059335 containerd[1494]: 2025-01-13 20:46:17.055 [INFO][5191] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f" Namespace="calico-system" Pod="csi-node-driver-n9xm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--n9xm5-eth0" Jan 13 20:46:17.075613 systemd[1]: Started cri-containerd-244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c.scope - libcontainer container 244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c. Jan 13 20:46:17.079653 systemd[1]: Started cri-containerd-4b39effc04969ec25fe4e0e755a39e240e6c609b00f9c233e69f275ad11be94b.scope - libcontainer container 4b39effc04969ec25fe4e0e755a39e240e6c609b00f9c233e69f275ad11be94b. 
Jan 13 20:46:17.096068 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:17.105378 systemd-networkd[1414]: cali545517f6b4d: Link UP Jan 13 20:46:17.108319 systemd-networkd[1414]: cali545517f6b4d: Gained carrier Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:16.389 [INFO][5168] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5168] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0 calico-apiserver-7fcd56cf7c- calico-apiserver 005c7b7f-f680-4342-abb9-808a0c23c33a 806 0 2025-01-13 20:45:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcd56cf7c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fcd56cf7c-gf2sc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali545517f6b4d [] []}} ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:16.646 [INFO][5168] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:16.737 [INFO][5279] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" HandleID="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Workload="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:16.748 [INFO][5279] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" HandleID="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Workload="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028d500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fcd56cf7c-gf2sc", "timestamp":"2025-01-13 20:46:16.737819144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:16.748 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.033 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.033 [INFO][5279] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.037 [INFO][5279] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.045 [INFO][5279] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.055 [INFO][5279] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.057 [INFO][5279] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.060 [INFO][5279] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.060 [INFO][5279] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.062 [INFO][5279] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3 Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.087 [INFO][5279] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.094 [INFO][5279] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.094 [INFO][5279] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" host="localhost" Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.094 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:17.131529 containerd[1494]: 2025-01-13 20:46:17.094 [INFO][5279] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" HandleID="k8s-pod-network.2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Workload="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.132769 containerd[1494]: 2025-01-13 20:46:17.099 [INFO][5168] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0", GenerateName:"calico-apiserver-7fcd56cf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"005c7b7f-f680-4342-abb9-808a0c23c33a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd56cf7c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fcd56cf7c-gf2sc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali545517f6b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.132769 containerd[1494]: 2025-01-13 20:46:17.099 [INFO][5168] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.132769 containerd[1494]: 2025-01-13 20:46:17.099 [INFO][5168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali545517f6b4d ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.132769 containerd[1494]: 2025-01-13 20:46:17.113 [INFO][5168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.132769 containerd[1494]: 2025-01-13 20:46:17.114 [INFO][5168] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0", GenerateName:"calico-apiserver-7fcd56cf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"005c7b7f-f680-4342-abb9-808a0c23c33a", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd56cf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3", Pod:"calico-apiserver-7fcd56cf7c-gf2sc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali545517f6b4d", MAC:"66:9b:bc:e3:5b:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.132769 containerd[1494]: 2025-01-13 20:46:17.126 [INFO][5168] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd56cf7c-gf2sc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fcd56cf7c--gf2sc-eth0" Jan 13 20:46:17.132968 containerd[1494]: time="2025-01-13T20:46:17.132882754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:17.132968 containerd[1494]: time="2025-01-13T20:46:17.132952296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:17.133018 containerd[1494]: time="2025-01-13T20:46:17.132966492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.133108 containerd[1494]: time="2025-01-13T20:46:17.133056080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.138034 containerd[1494]: time="2025-01-13T20:46:17.137985454Z" level=info msg="StartContainer for \"4b39effc04969ec25fe4e0e755a39e240e6c609b00f9c233e69f275ad11be94b\" returns successfully" Jan 13 20:46:17.167188 containerd[1494]: time="2025-01-13T20:46:17.166932447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-z54wj,Uid:164635ec-fca2-4958-bf9f-f8a81545fa24,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c\"" Jan 13 20:46:17.179528 kubelet[2681]: E0113 20:46:17.179048 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:17.180302 systemd[1]: Started cri-containerd-2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f.scope - libcontainer container 
2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f. Jan 13 20:46:17.182049 kubelet[2681]: E0113 20:46:17.181684 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:17.201173 kubelet[2681]: I0113 20:46:17.199756 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-sj5ll" podStartSLOduration=31.199685164999998 podStartE2EDuration="31.199685165s" podCreationTimestamp="2025-01-13 20:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:17.197694287 +0000 UTC m=+44.056780957" watchObservedRunningTime="2025-01-13 20:46:17.199685165 +0000 UTC m=+44.058771825" Jan 13 20:46:17.225823 systemd-networkd[1414]: cali371e1660f34: Link UP Jan 13 20:46:17.226925 systemd-networkd[1414]: cali371e1660f34: Gained carrier Jan 13 20:46:17.255304 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:16.418 [INFO][5205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5205] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--zjvgd-eth0 coredns-76f75df574- kube-system 82db675e-45a2-40cb-aaa5-0e3781350d23 803 0 2025-01-13 20:45:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-zjvgd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali371e1660f34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 
0 }] []}} ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:16.645 [INFO][5205] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:16.744 [INFO][5277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" HandleID="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Workload="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:16.750 [INFO][5277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" HandleID="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Workload="localhost-k8s-coredns--76f75df574--zjvgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e4230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-zjvgd", "timestamp":"2025-01-13 20:46:16.744177481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:16.750 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.094 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.095 [INFO][5277] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.100 [INFO][5277] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.107 [INFO][5277] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.125 [INFO][5277] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.127 [INFO][5277] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.134 [INFO][5277] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.135 [INFO][5277] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.142 [INFO][5277] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094 Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.153 [INFO][5277] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.163 [INFO][5277] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.163 [INFO][5277] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" host="localhost" Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.163 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:17.265048 containerd[1494]: 2025-01-13 20:46:17.163 [INFO][5277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" HandleID="k8s-pod-network.67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Workload="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.266189 containerd[1494]: 2025-01-13 20:46:17.188 [INFO][5205] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--zjvgd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"82db675e-45a2-40cb-aaa5-0e3781350d23", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-zjvgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali371e1660f34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.266189 containerd[1494]: 2025-01-13 20:46:17.189 [INFO][5205] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.266189 containerd[1494]: 2025-01-13 20:46:17.189 [INFO][5205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali371e1660f34 ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.266189 containerd[1494]: 2025-01-13 20:46:17.228 [INFO][5205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" 
Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.266189 containerd[1494]: 2025-01-13 20:46:17.234 [INFO][5205] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--zjvgd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"82db675e-45a2-40cb-aaa5-0e3781350d23", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094", Pod:"coredns-76f75df574-zjvgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali371e1660f34", MAC:"5e:72:88:a3:99:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:17.266189 containerd[1494]: 2025-01-13 20:46:17.249 [INFO][5205] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094" Namespace="kube-system" Pod="coredns-76f75df574-zjvgd" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zjvgd-eth0" Jan 13 20:46:17.266777 containerd[1494]: time="2025-01-13T20:46:17.266191007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:17.266777 containerd[1494]: time="2025-01-13T20:46:17.266287287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:17.266777 containerd[1494]: time="2025-01-13T20:46:17.266307545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.266777 containerd[1494]: time="2025-01-13T20:46:17.266411551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.305727 systemd[1]: Started cri-containerd-2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3.scope - libcontainer container 2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3. 
Jan 13 20:46:17.307258 containerd[1494]: time="2025-01-13T20:46:17.306331052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n9xm5,Uid:39e13210-d183-473d-999b-c81aa9bc8ccf,Namespace:calico-system,Attempt:7,} returns sandbox id \"2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f\"" Jan 13 20:46:17.334193 containerd[1494]: time="2025-01-13T20:46:17.333789320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:17.334193 containerd[1494]: time="2025-01-13T20:46:17.333872747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:17.334193 containerd[1494]: time="2025-01-13T20:46:17.333888847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.334193 containerd[1494]: time="2025-01-13T20:46:17.333977865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:17.344504 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:17.373754 systemd[1]: Started cri-containerd-67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094.scope - libcontainer container 67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094. 
Jan 13 20:46:17.394864 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:17.429720 containerd[1494]: time="2025-01-13T20:46:17.429257865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd56cf7c-gf2sc,Uid:005c7b7f-f680-4342-abb9-808a0c23c33a,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3\"" Jan 13 20:46:17.449270 containerd[1494]: time="2025-01-13T20:46:17.449108992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zjvgd,Uid:82db675e-45a2-40cb-aaa5-0e3781350d23,Namespace:kube-system,Attempt:7,} returns sandbox id \"67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094\"" Jan 13 20:46:17.450378 kubelet[2681]: E0113 20:46:17.450304 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:17.454834 containerd[1494]: time="2025-01-13T20:46:17.454520983Z" level=info msg="CreateContainer within sandbox \"67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:46:17.473493 kernel: bpftool[5807]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:46:17.481650 containerd[1494]: time="2025-01-13T20:46:17.481580041Z" level=info msg="CreateContainer within sandbox \"67df647ea1f04f9347051b9b02a8e73be649e436aea8cc607735330cb4996094\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fc4fe46321da384d9b085313b5806d04e2dc3939d45e2676bad5310c1fdfc4a\"" Jan 13 20:46:17.482602 containerd[1494]: time="2025-01-13T20:46:17.482284183Z" level=info msg="StartContainer for \"0fc4fe46321da384d9b085313b5806d04e2dc3939d45e2676bad5310c1fdfc4a\"" Jan 13 20:46:17.519842 systemd[1]: Started 
cri-containerd-0fc4fe46321da384d9b085313b5806d04e2dc3939d45e2676bad5310c1fdfc4a.scope - libcontainer container 0fc4fe46321da384d9b085313b5806d04e2dc3939d45e2676bad5310c1fdfc4a. Jan 13 20:46:17.555865 containerd[1494]: time="2025-01-13T20:46:17.555268137Z" level=info msg="StartContainer for \"0fc4fe46321da384d9b085313b5806d04e2dc3939d45e2676bad5310c1fdfc4a\" returns successfully" Jan 13 20:46:17.744761 systemd-networkd[1414]: vxlan.calico: Link UP Jan 13 20:46:17.744771 systemd-networkd[1414]: vxlan.calico: Gained carrier Jan 13 20:46:18.071702 systemd-networkd[1414]: calia014e95b53d: Gained IPv6LL Jan 13 20:46:18.186038 kubelet[2681]: E0113 20:46:18.186006 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:18.192360 kubelet[2681]: E0113 20:46:18.191956 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:18.192360 kubelet[2681]: E0113 20:46:18.192039 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:18.197992 kubelet[2681]: I0113 20:46:18.197228 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zjvgd" podStartSLOduration=32.197189279 podStartE2EDuration="32.197189279s" podCreationTimestamp="2025-01-13 20:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:18.195310463 +0000 UTC m=+45.054397123" watchObservedRunningTime="2025-01-13 20:46:18.197189279 +0000 UTC m=+45.056275949" Jan 13 20:46:18.391610 systemd-networkd[1414]: califf8956b48a0: Gained IPv6LL Jan 13 20:46:18.519656 
systemd-networkd[1414]: calia6b40da7967: Gained IPv6LL Jan 13 20:46:18.711671 systemd-networkd[1414]: cali2d26df1b70d: Gained IPv6LL Jan 13 20:46:19.095674 systemd-networkd[1414]: cali371e1660f34: Gained IPv6LL Jan 13 20:46:19.096486 systemd-networkd[1414]: cali545517f6b4d: Gained IPv6LL Jan 13 20:46:19.194425 kubelet[2681]: E0113 20:46:19.194387 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:19.195608 kubelet[2681]: E0113 20:46:19.195260 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:19.571949 containerd[1494]: time="2025-01-13T20:46:19.571818891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:19.574725 containerd[1494]: time="2025-01-13T20:46:19.574668350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 20:46:19.576234 containerd[1494]: time="2025-01-13T20:46:19.576194214Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:19.578608 containerd[1494]: time="2025-01-13T20:46:19.578569823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:19.579240 containerd[1494]: time="2025-01-13T20:46:19.579202811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo 
tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.554950169s" Jan 13 20:46:19.579240 containerd[1494]: time="2025-01-13T20:46:19.579227778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 20:46:19.579754 containerd[1494]: time="2025-01-13T20:46:19.579720603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:46:19.587699 containerd[1494]: time="2025-01-13T20:46:19.587649826Z" level=info msg="CreateContainer within sandbox \"f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:46:19.621072 containerd[1494]: time="2025-01-13T20:46:19.621012809Z" level=info msg="CreateContainer within sandbox \"f4cbbe56e6093272eedf258a0085da60c666fe660becfcaf8fd5ad759e4df30a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"63c28e5cf5cfb3539d6f43786591e813d04ccfba570dba57a05d4900d6b8cd0e\"" Jan 13 20:46:19.621561 containerd[1494]: time="2025-01-13T20:46:19.621532234Z" level=info msg="StartContainer for \"63c28e5cf5cfb3539d6f43786591e813d04ccfba570dba57a05d4900d6b8cd0e\"" Jan 13 20:46:19.647628 systemd[1]: Started cri-containerd-63c28e5cf5cfb3539d6f43786591e813d04ccfba570dba57a05d4900d6b8cd0e.scope - libcontainer container 63c28e5cf5cfb3539d6f43786591e813d04ccfba570dba57a05d4900d6b8cd0e. 
Jan 13 20:46:19.672610 systemd-networkd[1414]: vxlan.calico: Gained IPv6LL Jan 13 20:46:19.766941 containerd[1494]: time="2025-01-13T20:46:19.766890638Z" level=info msg="StartContainer for \"63c28e5cf5cfb3539d6f43786591e813d04ccfba570dba57a05d4900d6b8cd0e\" returns successfully" Jan 13 20:46:19.799246 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:53880.service - OpenSSH per-connection server daemon (10.0.0.1:53880). Jan 13 20:46:19.870963 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 53880 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:19.873377 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:19.877750 systemd-logind[1485]: New session 11 of user core. Jan 13 20:46:19.888612 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:46:20.012120 sshd[6021]: Connection closed by 10.0.0.1 port 53880 Jan 13 20:46:20.012524 sshd-session[6019]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:20.026421 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:53880.service: Deactivated successfully. Jan 13 20:46:20.028317 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:46:20.028895 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:46:20.035813 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:53896.service - OpenSSH per-connection server daemon (10.0.0.1:53896). Jan 13 20:46:20.036964 systemd-logind[1485]: Removed session 11. Jan 13 20:46:20.070962 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 53896 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:20.072397 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:20.076412 systemd-logind[1485]: New session 12 of user core. Jan 13 20:46:20.085581 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 20:46:20.252685 sshd[6036]: Connection closed by 10.0.0.1 port 53896 Jan 13 20:46:20.252923 sshd-session[6034]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:20.262861 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:53896.service: Deactivated successfully. Jan 13 20:46:20.265055 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:46:20.265807 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:46:20.273803 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:53912.service - OpenSSH per-connection server daemon (10.0.0.1:53912). Jan 13 20:46:20.274368 systemd-logind[1485]: Removed session 12. Jan 13 20:46:20.310572 sshd[6068]: Accepted publickey for core from 10.0.0.1 port 53912 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:20.312130 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:20.316276 systemd-logind[1485]: New session 13 of user core. Jan 13 20:46:20.329579 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 13 20:46:20.368978 kubelet[2681]: I0113 20:46:20.368437 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7995746cb4-vtxf5" podStartSLOduration=25.812434622 podStartE2EDuration="28.36838003s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:17.023587593 +0000 UTC m=+43.882674253" lastFinishedPulling="2025-01-13 20:46:19.579533001 +0000 UTC m=+46.438619661" observedRunningTime="2025-01-13 20:46:20.230201972 +0000 UTC m=+47.089288642" watchObservedRunningTime="2025-01-13 20:46:20.36838003 +0000 UTC m=+47.227466690" Jan 13 20:46:20.451420 sshd[6070]: Connection closed by 10.0.0.1 port 53912 Jan 13 20:46:20.451806 sshd-session[6068]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:20.457265 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:53912.service: Deactivated successfully. Jan 13 20:46:20.459682 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:46:20.460423 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:46:20.461526 systemd-logind[1485]: Removed session 13. 
Jan 13 20:46:22.180137 containerd[1494]: time="2025-01-13T20:46:22.180061363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:22.220571 containerd[1494]: time="2025-01-13T20:46:22.220485005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 20:46:22.278281 containerd[1494]: time="2025-01-13T20:46:22.278240924Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:22.311645 containerd[1494]: time="2025-01-13T20:46:22.311584329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:22.312512 containerd[1494]: time="2025-01-13T20:46:22.312472747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.732694085s" Jan 13 20:46:22.312569 containerd[1494]: time="2025-01-13T20:46:22.312512992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:46:22.313410 containerd[1494]: time="2025-01-13T20:46:22.313329113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:46:22.314261 containerd[1494]: time="2025-01-13T20:46:22.314235996Z" level=info msg="CreateContainer within sandbox 
\"244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:46:22.724988 containerd[1494]: time="2025-01-13T20:46:22.724940156Z" level=info msg="CreateContainer within sandbox \"244a26be04846d6685d18f39f41532fafeae94bfeeb9090117f73fcb320de38c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f215bbf3381e6a581b01b554550f7b8f8d155e0f01854efb390e35e413b1b698\"" Jan 13 20:46:22.725705 containerd[1494]: time="2025-01-13T20:46:22.725445855Z" level=info msg="StartContainer for \"f215bbf3381e6a581b01b554550f7b8f8d155e0f01854efb390e35e413b1b698\"" Jan 13 20:46:22.758602 systemd[1]: Started cri-containerd-f215bbf3381e6a581b01b554550f7b8f8d155e0f01854efb390e35e413b1b698.scope - libcontainer container f215bbf3381e6a581b01b554550f7b8f8d155e0f01854efb390e35e413b1b698. Jan 13 20:46:22.894183 containerd[1494]: time="2025-01-13T20:46:22.894049109Z" level=info msg="StartContainer for \"f215bbf3381e6a581b01b554550f7b8f8d155e0f01854efb390e35e413b1b698\" returns successfully" Jan 13 20:46:23.244647 kubelet[2681]: I0113 20:46:23.244334 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-z54wj" podStartSLOduration=26.10190111 podStartE2EDuration="31.244271067s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:17.170447406 +0000 UTC m=+44.029534066" lastFinishedPulling="2025-01-13 20:46:22.312817363 +0000 UTC m=+49.171904023" observedRunningTime="2025-01-13 20:46:23.24279069 +0000 UTC m=+50.101877340" watchObservedRunningTime="2025-01-13 20:46:23.244271067 +0000 UTC m=+50.103357727" Jan 13 20:46:25.166317 containerd[1494]: time="2025-01-13T20:46:25.166234856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.195739 containerd[1494]: 
time="2025-01-13T20:46:25.195677501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:46:25.225003 containerd[1494]: time="2025-01-13T20:46:25.224967640Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.241062 containerd[1494]: time="2025-01-13T20:46:25.240999353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.241607 containerd[1494]: time="2025-01-13T20:46:25.241570624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.928195705s" Jan 13 20:46:25.241607 containerd[1494]: time="2025-01-13T20:46:25.241597875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:46:25.242339 containerd[1494]: time="2025-01-13T20:46:25.241968471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:46:25.243815 containerd[1494]: time="2025-01-13T20:46:25.243785781Z" level=info msg="CreateContainer within sandbox \"2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:46:25.285770 containerd[1494]: time="2025-01-13T20:46:25.285713424Z" level=info msg="CreateContainer within sandbox \"2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"925622f02ee1a30b5689c9acba27694cc1debb97df2df840c797e5d284ac9fd2\"" Jan 13 20:46:25.286186 containerd[1494]: time="2025-01-13T20:46:25.286148249Z" level=info msg="StartContainer for \"925622f02ee1a30b5689c9acba27694cc1debb97df2df840c797e5d284ac9fd2\"" Jan 13 20:46:25.325605 systemd[1]: Started cri-containerd-925622f02ee1a30b5689c9acba27694cc1debb97df2df840c797e5d284ac9fd2.scope - libcontainer container 925622f02ee1a30b5689c9acba27694cc1debb97df2df840c797e5d284ac9fd2. Jan 13 20:46:25.365341 containerd[1494]: time="2025-01-13T20:46:25.365174904Z" level=info msg="StartContainer for \"925622f02ee1a30b5689c9acba27694cc1debb97df2df840c797e5d284ac9fd2\" returns successfully" Jan 13 20:46:25.475031 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:41644.service - OpenSSH per-connection server daemon (10.0.0.1:41644). Jan 13 20:46:25.533294 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 41644 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:25.535149 sshd-session[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:25.539860 systemd-logind[1485]: New session 14 of user core. Jan 13 20:46:25.545695 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:46:25.686935 sshd[6192]: Connection closed by 10.0.0.1 port 41644 Jan 13 20:46:25.687323 sshd-session[6190]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:25.692375 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:41644.service: Deactivated successfully. 
Jan 13 20:46:25.695128 containerd[1494]: time="2025-01-13T20:46:25.695063210Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.695701 containerd[1494]: time="2025-01-13T20:46:25.695651194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:46:25.696342 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:46:25.697187 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:46:25.698313 systemd-logind[1485]: Removed session 14. Jan 13 20:46:25.699204 containerd[1494]: time="2025-01-13T20:46:25.699157503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 457.145541ms" Jan 13 20:46:25.699204 containerd[1494]: time="2025-01-13T20:46:25.699199752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:46:25.699862 containerd[1494]: time="2025-01-13T20:46:25.699824254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:46:25.701777 containerd[1494]: time="2025-01-13T20:46:25.701735501Z" level=info msg="CreateContainer within sandbox \"2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:46:25.717066 containerd[1494]: time="2025-01-13T20:46:25.717001826Z" level=info msg="CreateContainer within sandbox \"2810651908106993a2b07177daa0f505947a2f9c9ddc54cde99658a5607baed3\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0a35a3e5f53af6c91078b00c2c9d0f47c964a7043504ab1d457f35c8766d09d8\"" Jan 13 20:46:25.717896 containerd[1494]: time="2025-01-13T20:46:25.717856490Z" level=info msg="StartContainer for \"0a35a3e5f53af6c91078b00c2c9d0f47c964a7043504ab1d457f35c8766d09d8\"" Jan 13 20:46:25.747607 systemd[1]: Started cri-containerd-0a35a3e5f53af6c91078b00c2c9d0f47c964a7043504ab1d457f35c8766d09d8.scope - libcontainer container 0a35a3e5f53af6c91078b00c2c9d0f47c964a7043504ab1d457f35c8766d09d8. Jan 13 20:46:25.798357 containerd[1494]: time="2025-01-13T20:46:25.798316042Z" level=info msg="StartContainer for \"0a35a3e5f53af6c91078b00c2c9d0f47c964a7043504ab1d457f35c8766d09d8\" returns successfully" Jan 13 20:46:26.251271 kubelet[2681]: I0113 20:46:26.251202 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fcd56cf7c-gf2sc" podStartSLOduration=25.984845314 podStartE2EDuration="34.25115451s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:17.433316115 +0000 UTC m=+44.292402775" lastFinishedPulling="2025-01-13 20:46:25.699625301 +0000 UTC m=+52.558711971" observedRunningTime="2025-01-13 20:46:26.250775959 +0000 UTC m=+53.109862609" watchObservedRunningTime="2025-01-13 20:46:26.25115451 +0000 UTC m=+53.110241170" Jan 13 20:46:27.224882 kubelet[2681]: I0113 20:46:27.224839 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:27.255786 containerd[1494]: time="2025-01-13T20:46:27.255722536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.256545 containerd[1494]: time="2025-01-13T20:46:27.256493332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 20:46:27.257598 
containerd[1494]: time="2025-01-13T20:46:27.257531511Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.259663 containerd[1494]: time="2025-01-13T20:46:27.259627553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:27.260327 containerd[1494]: time="2025-01-13T20:46:27.260291820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.560431347s" Jan 13 20:46:27.260327 containerd[1494]: time="2025-01-13T20:46:27.260319351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 20:46:27.262145 containerd[1494]: time="2025-01-13T20:46:27.262116052Z" level=info msg="CreateContainer within sandbox \"2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:46:27.276882 containerd[1494]: time="2025-01-13T20:46:27.276833345Z" level=info msg="CreateContainer within sandbox \"2b64fc061b898d5dc495f44e125ef3ccd1662898e330b6f9654cc01305f7248f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7edfb783a627851e8ec5f7c9765829975e753909f5f88a774761031db2335656\"" Jan 13 20:46:27.277369 containerd[1494]: time="2025-01-13T20:46:27.277279833Z" level=info 
msg="StartContainer for \"7edfb783a627851e8ec5f7c9765829975e753909f5f88a774761031db2335656\"" Jan 13 20:46:27.319887 systemd[1]: Started cri-containerd-7edfb783a627851e8ec5f7c9765829975e753909f5f88a774761031db2335656.scope - libcontainer container 7edfb783a627851e8ec5f7c9765829975e753909f5f88a774761031db2335656. Jan 13 20:46:27.357960 containerd[1494]: time="2025-01-13T20:46:27.357912477Z" level=info msg="StartContainer for \"7edfb783a627851e8ec5f7c9765829975e753909f5f88a774761031db2335656\" returns successfully" Jan 13 20:46:28.241565 kubelet[2681]: I0113 20:46:28.241425 2681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-n9xm5" podStartSLOduration=26.288594295 podStartE2EDuration="36.241371579s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:17.307845065 +0000 UTC m=+44.166931725" lastFinishedPulling="2025-01-13 20:46:27.260622349 +0000 UTC m=+54.119709009" observedRunningTime="2025-01-13 20:46:28.23988495 +0000 UTC m=+55.098971610" watchObservedRunningTime="2025-01-13 20:46:28.241371579 +0000 UTC m=+55.100458239" Jan 13 20:46:28.310189 kubelet[2681]: I0113 20:46:28.310144 2681 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:46:28.311197 kubelet[2681]: I0113 20:46:28.311169 2681 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:46:30.703375 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:41654.service - OpenSSH per-connection server daemon (10.0.0.1:41654). 
Jan 13 20:46:30.752090 sshd[6295]: Accepted publickey for core from 10.0.0.1 port 41654 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:30.753824 sshd-session[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:30.757853 systemd-logind[1485]: New session 15 of user core. Jan 13 20:46:30.773595 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:46:30.896932 sshd[6297]: Connection closed by 10.0.0.1 port 41654 Jan 13 20:46:30.897349 sshd-session[6295]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:30.907514 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:41654.service: Deactivated successfully. Jan 13 20:46:30.909581 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:46:30.911069 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:46:30.915747 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:41668.service - OpenSSH per-connection server daemon (10.0.0.1:41668). Jan 13 20:46:30.916646 systemd-logind[1485]: Removed session 15. Jan 13 20:46:30.951144 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:30.952845 sshd-session[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:30.956931 systemd-logind[1485]: New session 16 of user core. Jan 13 20:46:30.967598 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:46:31.171620 sshd[6311]: Connection closed by 10.0.0.1 port 41668 Jan 13 20:46:31.172063 sshd-session[6309]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:31.192643 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:41668.service: Deactivated successfully. Jan 13 20:46:31.194982 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:46:31.196984 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. 
Jan 13 20:46:31.204755 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:51368.service - OpenSSH per-connection server daemon (10.0.0.1:51368). Jan 13 20:46:31.205717 systemd-logind[1485]: Removed session 16. Jan 13 20:46:31.240880 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 51368 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:31.242365 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:31.246171 systemd-logind[1485]: New session 17 of user core. Jan 13 20:46:31.252567 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:46:32.919701 sshd[6324]: Connection closed by 10.0.0.1 port 51368 Jan 13 20:46:32.921323 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:32.945147 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:51372.service - OpenSSH per-connection server daemon (10.0.0.1:51372). Jan 13 20:46:32.946854 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:51368.service: Deactivated successfully. Jan 13 20:46:32.950793 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:46:32.954195 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:46:32.956098 systemd-logind[1485]: Removed session 17. Jan 13 20:46:32.990201 sshd[6339]: Accepted publickey for core from 10.0.0.1 port 51372 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:32.992193 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:32.997085 systemd-logind[1485]: New session 18 of user core. Jan 13 20:46:33.008626 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 20:46:33.210557 containerd[1494]: time="2025-01-13T20:46:33.210394716Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:33.210557 containerd[1494]: time="2025-01-13T20:46:33.210547543Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:33.210557 containerd[1494]: time="2025-01-13T20:46:33.210559856Z" level=info msg="StopPodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:33.217228 containerd[1494]: time="2025-01-13T20:46:33.217191637Z" level=info msg="RemovePodSandbox for \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:33.230491 containerd[1494]: time="2025-01-13T20:46:33.230355420Z" level=info msg="Forcibly stopping sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\"" Jan 13 20:46:33.231252 containerd[1494]: time="2025-01-13T20:46:33.230723139Z" level=info msg="TearDown network for sandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" successfully" Jan 13 20:46:33.238907 sshd[6344]: Connection closed by 10.0.0.1 port 51372 Jan 13 20:46:33.239990 sshd-session[6339]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:33.246850 containerd[1494]: time="2025-01-13T20:46:33.246812510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.247051 containerd[1494]: time="2025-01-13T20:46:33.247030979Z" level=info msg="RemovePodSandbox \"abbad30cef253516c39179a49c5e55223161d30a70d6c612a6bb114a7894a3c6\" returns successfully" Jan 13 20:46:33.247741 containerd[1494]: time="2025-01-13T20:46:33.247702269Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:33.247820 containerd[1494]: time="2025-01-13T20:46:33.247801465Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:33.247820 containerd[1494]: time="2025-01-13T20:46:33.247816824Z" level=info msg="StopPodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:33.250510 containerd[1494]: time="2025-01-13T20:46:33.248516807Z" level=info msg="RemovePodSandbox for \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:33.250510 containerd[1494]: time="2025-01-13T20:46:33.248543658Z" level=info msg="Forcibly stopping sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\"" Jan 13 20:46:33.250510 containerd[1494]: time="2025-01-13T20:46:33.248618769Z" level=info msg="TearDown network for sandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" successfully" Jan 13 20:46:33.250104 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:51372.service: Deactivated successfully. Jan 13 20:46:33.252303 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:46:33.253295 containerd[1494]: time="2025-01-13T20:46:33.253244676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.253361 containerd[1494]: time="2025-01-13T20:46:33.253349633Z" level=info msg="RemovePodSandbox \"93cd3dfe84dc802a45c7fa5cb6f00263c47c081948a2192c83c40f19b0ce4991\" returns successfully" Jan 13 20:46:33.253294 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:46:33.254062 containerd[1494]: time="2025-01-13T20:46:33.253841716Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:33.254062 containerd[1494]: time="2025-01-13T20:46:33.253968634Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully" Jan 13 20:46:33.254062 containerd[1494]: time="2025-01-13T20:46:33.253981858Z" level=info msg="StopPodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully" Jan 13 20:46:33.256385 containerd[1494]: time="2025-01-13T20:46:33.254343287Z" level=info msg="RemovePodSandbox for \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:33.256385 containerd[1494]: time="2025-01-13T20:46:33.254368424Z" level=info msg="Forcibly stopping sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\"" Jan 13 20:46:33.256385 containerd[1494]: time="2025-01-13T20:46:33.254444777Z" level=info msg="TearDown network for sandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" successfully" Jan 13 20:46:33.258592 containerd[1494]: time="2025-01-13T20:46:33.258534028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.258737 containerd[1494]: time="2025-01-13T20:46:33.258602456Z" level=info msg="RemovePodSandbox \"d340afcd2ab4fd752fb60b5142ca77cc11c10e55fa6f0957f7b3336e2d6afadb\" returns successfully" Jan 13 20:46:33.258973 containerd[1494]: time="2025-01-13T20:46:33.258952101Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" Jan 13 20:46:33.261147 containerd[1494]: time="2025-01-13T20:46:33.259173687Z" level=info msg="TearDown network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" successfully" Jan 13 20:46:33.261147 containerd[1494]: time="2025-01-13T20:46:33.259222158Z" level=info msg="StopPodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" returns successfully" Jan 13 20:46:33.261147 containerd[1494]: time="2025-01-13T20:46:33.259429648Z" level=info msg="RemovePodSandbox for \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" Jan 13 20:46:33.261147 containerd[1494]: time="2025-01-13T20:46:33.259446419Z" level=info msg="Forcibly stopping sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\"" Jan 13 20:46:33.261147 containerd[1494]: time="2025-01-13T20:46:33.259551476Z" level=info msg="TearDown network for sandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" successfully" Jan 13 20:46:33.262948 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:51380.service - OpenSSH per-connection server daemon (10.0.0.1:51380). Jan 13 20:46:33.263855 containerd[1494]: time="2025-01-13T20:46:33.263763296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.263855 containerd[1494]: time="2025-01-13T20:46:33.263801117Z" level=info msg="RemovePodSandbox \"23194723d7f09a6b02ee8b028c9f41dbb9a379a51bf0039eae4432776103d4c9\" returns successfully" Jan 13 20:46:33.264326 containerd[1494]: time="2025-01-13T20:46:33.264130496Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\"" Jan 13 20:46:33.264326 containerd[1494]: time="2025-01-13T20:46:33.264236735Z" level=info msg="TearDown network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" successfully" Jan 13 20:46:33.264326 containerd[1494]: time="2025-01-13T20:46:33.264247485Z" level=info msg="StopPodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" returns successfully" Jan 13 20:46:33.264259 systemd-logind[1485]: Removed session 18. Jan 13 20:46:33.264565 containerd[1494]: time="2025-01-13T20:46:33.264542679Z" level=info msg="RemovePodSandbox for \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\"" Jan 13 20:46:33.264598 containerd[1494]: time="2025-01-13T20:46:33.264567876Z" level=info msg="Forcibly stopping sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\"" Jan 13 20:46:33.264676 containerd[1494]: time="2025-01-13T20:46:33.264639220Z" level=info msg="TearDown network for sandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" successfully" Jan 13 20:46:33.271285 containerd[1494]: time="2025-01-13T20:46:33.271246744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.271472 containerd[1494]: time="2025-01-13T20:46:33.271435359Z" level=info msg="RemovePodSandbox \"6e14386429269b04c1f1d0682d189b8767b2ce2c0c51b08f81a77d8730305798\" returns successfully" Jan 13 20:46:33.272795 containerd[1494]: time="2025-01-13T20:46:33.272607858Z" level=info msg="StopPodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\"" Jan 13 20:46:33.272795 containerd[1494]: time="2025-01-13T20:46:33.272704670Z" level=info msg="TearDown network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" successfully" Jan 13 20:46:33.272795 containerd[1494]: time="2025-01-13T20:46:33.272737792Z" level=info msg="StopPodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" returns successfully" Jan 13 20:46:33.273186 containerd[1494]: time="2025-01-13T20:46:33.273145266Z" level=info msg="RemovePodSandbox for \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\"" Jan 13 20:46:33.273231 containerd[1494]: time="2025-01-13T20:46:33.273197083Z" level=info msg="Forcibly stopping sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\"" Jan 13 20:46:33.273369 containerd[1494]: time="2025-01-13T20:46:33.273315234Z" level=info msg="TearDown network for sandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" successfully" Jan 13 20:46:33.278167 containerd[1494]: time="2025-01-13T20:46:33.278084290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.278167 containerd[1494]: time="2025-01-13T20:46:33.278149994Z" level=info msg="RemovePodSandbox \"72c07b803376ebd7309253f0aeccf87b9cac3ea8cb9d3118c4909336027dd971\" returns successfully" Jan 13 20:46:33.278663 containerd[1494]: time="2025-01-13T20:46:33.278612091Z" level=info msg="StopPodSandbox for \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\"" Jan 13 20:46:33.280270 containerd[1494]: time="2025-01-13T20:46:33.280230326Z" level=info msg="TearDown network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\" successfully" Jan 13 20:46:33.280270 containerd[1494]: time="2025-01-13T20:46:33.280256976Z" level=info msg="StopPodSandbox for \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\" returns successfully" Jan 13 20:46:33.281302 containerd[1494]: time="2025-01-13T20:46:33.280726927Z" level=info msg="RemovePodSandbox for \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\"" Jan 13 20:46:33.281302 containerd[1494]: time="2025-01-13T20:46:33.280761051Z" level=info msg="Forcibly stopping sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\"" Jan 13 20:46:33.281302 containerd[1494]: time="2025-01-13T20:46:33.280843195Z" level=info msg="TearDown network for sandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\" successfully" Jan 13 20:46:33.287021 containerd[1494]: time="2025-01-13T20:46:33.286972854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.287125 containerd[1494]: time="2025-01-13T20:46:33.287053015Z" level=info msg="RemovePodSandbox \"8b92c6ff13e868fa66a8829660a82a010578752a97948089d9ff0c013a5f65b5\" returns successfully" Jan 13 20:46:33.287810 containerd[1494]: time="2025-01-13T20:46:33.287621691Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:33.287810 containerd[1494]: time="2025-01-13T20:46:33.287729453Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:33.287810 containerd[1494]: time="2025-01-13T20:46:33.287739923Z" level=info msg="StopPodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:33.288560 containerd[1494]: time="2025-01-13T20:46:33.288508344Z" level=info msg="RemovePodSandbox for \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:33.288560 containerd[1494]: time="2025-01-13T20:46:33.288554922Z" level=info msg="Forcibly stopping sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\"" Jan 13 20:46:33.288704 containerd[1494]: time="2025-01-13T20:46:33.288655721Z" level=info msg="TearDown network for sandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" successfully" Jan 13 20:46:33.293171 containerd[1494]: time="2025-01-13T20:46:33.293106420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.293421 containerd[1494]: time="2025-01-13T20:46:33.293180348Z" level=info msg="RemovePodSandbox \"3a6ce2eee02d1dbcb45ada3fd7cbcdb48e120e97042041c570dbe8e21f15fe61\" returns successfully" Jan 13 20:46:33.293494 containerd[1494]: time="2025-01-13T20:46:33.293440226Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:33.293608 containerd[1494]: time="2025-01-13T20:46:33.293572614Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:33.293608 containerd[1494]: time="2025-01-13T20:46:33.293591850Z" level=info msg="StopPodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:33.294132 containerd[1494]: time="2025-01-13T20:46:33.294080416Z" level=info msg="RemovePodSandbox for \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:33.294132 containerd[1494]: time="2025-01-13T20:46:33.294107056Z" level=info msg="Forcibly stopping sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\"" Jan 13 20:46:33.294240 containerd[1494]: time="2025-01-13T20:46:33.294175564Z" level=info msg="TearDown network for sandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" successfully" Jan 13 20:46:33.304101 sshd[6357]: Accepted publickey for core from 10.0.0.1 port 51380 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:33.305830 sshd-session[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:33.309843 systemd-logind[1485]: New session 19 of user core. Jan 13 20:46:33.322773 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 20:46:33.357362 containerd[1494]: time="2025-01-13T20:46:33.357282332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:46:33.357362 containerd[1494]: time="2025-01-13T20:46:33.357386047Z" level=info msg="RemovePodSandbox \"e7cb965d13504f1ff277e861fa11b4977497013625c6f50ffb55f5f6f5af4083\" returns successfully" Jan 13 20:46:33.357972 containerd[1494]: time="2025-01-13T20:46:33.357943723Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:33.358111 containerd[1494]: time="2025-01-13T20:46:33.358079768Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:33.358111 containerd[1494]: time="2025-01-13T20:46:33.358099354Z" level=info msg="StopPodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:33.359355 containerd[1494]: time="2025-01-13T20:46:33.358432761Z" level=info msg="RemovePodSandbox for \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:33.359355 containerd[1494]: time="2025-01-13T20:46:33.358481973Z" level=info msg="Forcibly stopping sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\"" Jan 13 20:46:33.359355 containerd[1494]: time="2025-01-13T20:46:33.358577201Z" level=info msg="TearDown network for sandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" successfully" Jan 13 20:46:33.439917 containerd[1494]: time="2025-01-13T20:46:33.439864442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Jan 13 20:46:33.440112 containerd[1494]: time="2025-01-13T20:46:33.439942388Z" level=info msg="RemovePodSandbox \"e2564878151b36d49e130f84a0cb180bcfb9b7ad55b15f086ef09be580f0daa7\" returns successfully" Jan 13 20:46:33.440518 containerd[1494]: time="2025-01-13T20:46:33.440493051Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:33.440743 containerd[1494]: time="2025-01-13T20:46:33.440678128Z" level=info msg="TearDown network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" successfully" Jan 13 20:46:33.440743 containerd[1494]: time="2025-01-13T20:46:33.440697324Z" level=info msg="StopPodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" returns successfully" Jan 13 20:46:33.441028 containerd[1494]: time="2025-01-13T20:46:33.440972110Z" level=info msg="RemovePodSandbox for \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:33.441141 sshd[6359]: Connection closed by 10.0.0.1 port 51380 Jan 13 20:46:33.441544 containerd[1494]: time="2025-01-13T20:46:33.441086855Z" level=info msg="Forcibly stopping sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\"" Jan 13 20:46:33.441544 containerd[1494]: time="2025-01-13T20:46:33.441200798Z" level=info msg="TearDown network for sandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" successfully" Jan 13 20:46:33.441602 sshd-session[6357]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:33.446788 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:51380.service: Deactivated successfully. Jan 13 20:46:33.449739 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:46:33.450470 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:46:33.451475 systemd-logind[1485]: Removed session 19. 
Jan 13 20:46:33.511080 containerd[1494]: time="2025-01-13T20:46:33.510855168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:46:33.511080 containerd[1494]: time="2025-01-13T20:46:33.510978469Z" level=info msg="RemovePodSandbox \"7b8f775ea15e4ee770173b643914782e26b5a5caeb7e696096ee7d10d84f3b28\" returns successfully" Jan 13 20:46:33.514147 containerd[1494]: time="2025-01-13T20:46:33.513656433Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" Jan 13 20:46:33.514147 containerd[1494]: time="2025-01-13T20:46:33.513810411Z" level=info msg="TearDown network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" successfully" Jan 13 20:46:33.514147 containerd[1494]: time="2025-01-13T20:46:33.513824919Z" level=info msg="StopPodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" returns successfully" Jan 13 20:46:33.515513 containerd[1494]: time="2025-01-13T20:46:33.514826598Z" level=info msg="RemovePodSandbox for \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" Jan 13 20:46:33.515513 containerd[1494]: time="2025-01-13T20:46:33.514859469Z" level=info msg="Forcibly stopping sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\"" Jan 13 20:46:33.515513 containerd[1494]: time="2025-01-13T20:46:33.515082427Z" level=info msg="TearDown network for sandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" successfully" Jan 13 20:46:33.543768 containerd[1494]: time="2025-01-13T20:46:33.543714226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Jan 13 20:46:33.543974 containerd[1494]: time="2025-01-13T20:46:33.543785259Z" level=info msg="RemovePodSandbox \"f4988e2facaa5e89e34462d75a75366ea46a47be23bbe281a0602bb806f05655\" returns successfully" Jan 13 20:46:33.545011 containerd[1494]: time="2025-01-13T20:46:33.544811054Z" level=info msg="StopPodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\"" Jan 13 20:46:33.545011 containerd[1494]: time="2025-01-13T20:46:33.544994017Z" level=info msg="TearDown network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" successfully" Jan 13 20:46:33.545011 containerd[1494]: time="2025-01-13T20:46:33.545012802Z" level=info msg="StopPodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" returns successfully" Jan 13 20:46:33.546331 containerd[1494]: time="2025-01-13T20:46:33.545799838Z" level=info msg="RemovePodSandbox for \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\"" Jan 13 20:46:33.546413 containerd[1494]: time="2025-01-13T20:46:33.546347816Z" level=info msg="Forcibly stopping sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\"" Jan 13 20:46:33.546752 containerd[1494]: time="2025-01-13T20:46:33.546447653Z" level=info msg="TearDown network for sandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" successfully" Jan 13 20:46:33.551324 containerd[1494]: time="2025-01-13T20:46:33.551263397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.551483 containerd[1494]: time="2025-01-13T20:46:33.551335512Z" level=info msg="RemovePodSandbox \"9455748307147426aeb2d41669eaf527ef4bfd69931e548b49fd11d5cd5b9b0a\" returns successfully" Jan 13 20:46:33.551950 containerd[1494]: time="2025-01-13T20:46:33.551907205Z" level=info msg="StopPodSandbox for \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\"" Jan 13 20:46:33.552091 containerd[1494]: time="2025-01-13T20:46:33.552067766Z" level=info msg="TearDown network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\" successfully" Jan 13 20:46:33.552091 containerd[1494]: time="2025-01-13T20:46:33.552088144Z" level=info msg="StopPodSandbox for \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\" returns successfully" Jan 13 20:46:33.552522 containerd[1494]: time="2025-01-13T20:46:33.552489446Z" level=info msg="RemovePodSandbox for \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\"" Jan 13 20:46:33.552595 containerd[1494]: time="2025-01-13T20:46:33.552524222Z" level=info msg="Forcibly stopping sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\"" Jan 13 20:46:33.552703 containerd[1494]: time="2025-01-13T20:46:33.552652292Z" level=info msg="TearDown network for sandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\" successfully" Jan 13 20:46:33.574220 containerd[1494]: time="2025-01-13T20:46:33.574148575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.574409 containerd[1494]: time="2025-01-13T20:46:33.574242171Z" level=info msg="RemovePodSandbox \"b7f80dbe1d0aa22df1b273b026c5c55223aa9925211a0c4f37189e9dfbc20a52\" returns successfully" Jan 13 20:46:33.574923 containerd[1494]: time="2025-01-13T20:46:33.574876461Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:33.575314 containerd[1494]: time="2025-01-13T20:46:33.575010753Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:33.575314 containerd[1494]: time="2025-01-13T20:46:33.575025230Z" level=info msg="StopPodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:33.575314 containerd[1494]: time="2025-01-13T20:46:33.575289486Z" level=info msg="RemovePodSandbox for \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:33.575314 containerd[1494]: time="2025-01-13T20:46:33.575312268Z" level=info msg="Forcibly stopping sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\"" Jan 13 20:46:33.575507 containerd[1494]: time="2025-01-13T20:46:33.575425521Z" level=info msg="TearDown network for sandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" successfully" Jan 13 20:46:33.599741 containerd[1494]: time="2025-01-13T20:46:33.599662254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.599741 containerd[1494]: time="2025-01-13T20:46:33.599742345Z" level=info msg="RemovePodSandbox \"62cf4521bb012b5e64016bc0646c7c71363d67c6edf7c7ed7b2d7524c5538528\" returns successfully" Jan 13 20:46:33.600289 containerd[1494]: time="2025-01-13T20:46:33.600237714Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:33.600434 containerd[1494]: time="2025-01-13T20:46:33.600393857Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:33.600434 containerd[1494]: time="2025-01-13T20:46:33.600408014Z" level=info msg="StopPodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:33.600732 containerd[1494]: time="2025-01-13T20:46:33.600706063Z" level=info msg="RemovePodSandbox for \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:33.600732 containerd[1494]: time="2025-01-13T20:46:33.600730518Z" level=info msg="Forcibly stopping sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\"" Jan 13 20:46:33.600863 containerd[1494]: time="2025-01-13T20:46:33.600810809Z" level=info msg="TearDown network for sandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" successfully" Jan 13 20:46:33.605197 containerd[1494]: time="2025-01-13T20:46:33.605133909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.605197 containerd[1494]: time="2025-01-13T20:46:33.605205363Z" level=info msg="RemovePodSandbox \"9a088f960440b27f4b8b8972c6b1aac6a03dfdaf3fcf4d835109207e312980ab\" returns successfully" Jan 13 20:46:33.605830 containerd[1494]: time="2025-01-13T20:46:33.605770943Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" Jan 13 20:46:33.605979 containerd[1494]: time="2025-01-13T20:46:33.605944539Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully" Jan 13 20:46:33.605979 containerd[1494]: time="2025-01-13T20:46:33.605968213Z" level=info msg="StopPodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully" Jan 13 20:46:33.606301 containerd[1494]: time="2025-01-13T20:46:33.606266303Z" level=info msg="RemovePodSandbox for \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" Jan 13 20:46:33.606387 containerd[1494]: time="2025-01-13T20:46:33.606300517Z" level=info msg="Forcibly stopping sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\"" Jan 13 20:46:33.606512 containerd[1494]: time="2025-01-13T20:46:33.606418969Z" level=info msg="TearDown network for sandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" successfully" Jan 13 20:46:33.611603 containerd[1494]: time="2025-01-13T20:46:33.611540356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.611603 containerd[1494]: time="2025-01-13T20:46:33.611606119Z" level=info msg="RemovePodSandbox \"b29ea4b8ecb412b6b7852a89467ef266b6174480d671a47334df6cf74e35b558\" returns successfully" Jan 13 20:46:33.612101 containerd[1494]: time="2025-01-13T20:46:33.612051043Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\"" Jan 13 20:46:33.612235 containerd[1494]: time="2025-01-13T20:46:33.612213739Z" level=info msg="TearDown network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" successfully" Jan 13 20:46:33.612235 containerd[1494]: time="2025-01-13T20:46:33.612230741Z" level=info msg="StopPodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" returns successfully" Jan 13 20:46:33.612686 containerd[1494]: time="2025-01-13T20:46:33.612661709Z" level=info msg="RemovePodSandbox for \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\"" Jan 13 20:46:33.612751 containerd[1494]: time="2025-01-13T20:46:33.612690373Z" level=info msg="Forcibly stopping sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\"" Jan 13 20:46:33.612824 containerd[1494]: time="2025-01-13T20:46:33.612770153Z" level=info msg="TearDown network for sandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" successfully" Jan 13 20:46:33.617713 containerd[1494]: time="2025-01-13T20:46:33.617638064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.617713 containerd[1494]: time="2025-01-13T20:46:33.617718004Z" level=info msg="RemovePodSandbox \"ef2ba296ab38c14c97936fd6bb7a3b32911279f126062594ea13709688af05c2\" returns successfully" Jan 13 20:46:33.618299 containerd[1494]: time="2025-01-13T20:46:33.618262907Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\"" Jan 13 20:46:33.618437 containerd[1494]: time="2025-01-13T20:46:33.618413940Z" level=info msg="TearDown network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" successfully" Jan 13 20:46:33.618437 containerd[1494]: time="2025-01-13T20:46:33.618427565Z" level=info msg="StopPodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" returns successfully" Jan 13 20:46:33.618832 containerd[1494]: time="2025-01-13T20:46:33.618794283Z" level=info msg="RemovePodSandbox for \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\"" Jan 13 20:46:33.618832 containerd[1494]: time="2025-01-13T20:46:33.618828016Z" level=info msg="Forcibly stopping sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\"" Jan 13 20:46:33.618965 containerd[1494]: time="2025-01-13T20:46:33.618916652Z" level=info msg="TearDown network for sandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" successfully" Jan 13 20:46:33.623005 containerd[1494]: time="2025-01-13T20:46:33.622952292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.623005 containerd[1494]: time="2025-01-13T20:46:33.623008519Z" level=info msg="RemovePodSandbox \"bae0e25d2f9b977698993d131fdea5d30eb50ca13b21dcc9ee6215999b72f79a\" returns successfully" Jan 13 20:46:33.623341 containerd[1494]: time="2025-01-13T20:46:33.623308281Z" level=info msg="StopPodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\"" Jan 13 20:46:33.623449 containerd[1494]: time="2025-01-13T20:46:33.623411745Z" level=info msg="TearDown network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" successfully" Jan 13 20:46:33.623449 containerd[1494]: time="2025-01-13T20:46:33.623426713Z" level=info msg="StopPodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" returns successfully" Jan 13 20:46:33.623866 containerd[1494]: time="2025-01-13T20:46:33.623836441Z" level=info msg="RemovePodSandbox for \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\"" Jan 13 20:46:33.623919 containerd[1494]: time="2025-01-13T20:46:33.623869423Z" level=info msg="Forcibly stopping sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\"" Jan 13 20:46:33.624015 containerd[1494]: time="2025-01-13T20:46:33.623966114Z" level=info msg="TearDown network for sandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" successfully" Jan 13 20:46:33.628406 containerd[1494]: time="2025-01-13T20:46:33.628357582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.628515 containerd[1494]: time="2025-01-13T20:46:33.628423937Z" level=info msg="RemovePodSandbox \"bb651b88f2691163bcfd7af2d7b2af5b6838a6454553e32ce0360a64cc3e4f05\" returns successfully" Jan 13 20:46:33.628750 containerd[1494]: time="2025-01-13T20:46:33.628723077Z" level=info msg="StopPodSandbox for \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\"" Jan 13 20:46:33.628847 containerd[1494]: time="2025-01-13T20:46:33.628828265Z" level=info msg="TearDown network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\" successfully" Jan 13 20:46:33.628847 containerd[1494]: time="2025-01-13T20:46:33.628842912Z" level=info msg="StopPodSandbox for \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\" returns successfully" Jan 13 20:46:33.629295 containerd[1494]: time="2025-01-13T20:46:33.629263542Z" level=info msg="RemovePodSandbox for \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\"" Jan 13 20:46:33.629377 containerd[1494]: time="2025-01-13T20:46:33.629297636Z" level=info msg="Forcibly stopping sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\"" Jan 13 20:46:33.629444 containerd[1494]: time="2025-01-13T20:46:33.629403735Z" level=info msg="TearDown network for sandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\" successfully" Jan 13 20:46:33.634162 containerd[1494]: time="2025-01-13T20:46:33.634125382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.634226 containerd[1494]: time="2025-01-13T20:46:33.634166389Z" level=info msg="RemovePodSandbox \"4f86cc6df5f8f706f2b658ac490172aa51f28d586bae4ba9d9de89881a7e15c4\" returns successfully" Jan 13 20:46:33.634621 containerd[1494]: time="2025-01-13T20:46:33.634581477Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:33.634792 containerd[1494]: time="2025-01-13T20:46:33.634688548Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:33.634792 containerd[1494]: time="2025-01-13T20:46:33.634704127Z" level=info msg="StopPodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:33.635014 containerd[1494]: time="2025-01-13T20:46:33.634985595Z" level=info msg="RemovePodSandbox for \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:33.635054 containerd[1494]: time="2025-01-13T20:46:33.635019428Z" level=info msg="Forcibly stopping sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\"" Jan 13 20:46:33.635164 containerd[1494]: time="2025-01-13T20:46:33.635117423Z" level=info msg="TearDown network for sandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" successfully" Jan 13 20:46:33.649231 containerd[1494]: time="2025-01-13T20:46:33.649174541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.649231 containerd[1494]: time="2025-01-13T20:46:33.649226799Z" level=info msg="RemovePodSandbox \"db9ca827637482aa723d2c1b315d1a41816325df25ce5145f83e830f90be3c0c\" returns successfully" Jan 13 20:46:33.649694 containerd[1494]: time="2025-01-13T20:46:33.649644483Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:33.649808 containerd[1494]: time="2025-01-13T20:46:33.649783213Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:33.649808 containerd[1494]: time="2025-01-13T20:46:33.649799564Z" level=info msg="StopPodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:33.650170 containerd[1494]: time="2025-01-13T20:46:33.650136946Z" level=info msg="RemovePodSandbox for \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:33.650230 containerd[1494]: time="2025-01-13T20:46:33.650174988Z" level=info msg="Forcibly stopping sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\"" Jan 13 20:46:33.650323 containerd[1494]: time="2025-01-13T20:46:33.650276979Z" level=info msg="TearDown network for sandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" successfully" Jan 13 20:46:33.655028 containerd[1494]: time="2025-01-13T20:46:33.654986173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.655086 containerd[1494]: time="2025-01-13T20:46:33.655032069Z" level=info msg="RemovePodSandbox \"23db0ff86a7f632dc0b199faffe86cb86814df83e77e483fd37ccce2915e2c09\" returns successfully" Jan 13 20:46:33.655565 containerd[1494]: time="2025-01-13T20:46:33.655517890Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:33.655725 containerd[1494]: time="2025-01-13T20:46:33.655693510Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully" Jan 13 20:46:33.655725 containerd[1494]: time="2025-01-13T20:46:33.655714599Z" level=info msg="StopPodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully" Jan 13 20:46:33.656117 containerd[1494]: time="2025-01-13T20:46:33.656081167Z" level=info msg="RemovePodSandbox for \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:33.656117 containerd[1494]: time="2025-01-13T20:46:33.656110943Z" level=info msg="Forcibly stopping sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\"" Jan 13 20:46:33.656284 containerd[1494]: time="2025-01-13T20:46:33.656209027Z" level=info msg="TearDown network for sandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" successfully" Jan 13 20:46:33.660578 containerd[1494]: time="2025-01-13T20:46:33.660543127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.660648 containerd[1494]: time="2025-01-13T20:46:33.660604191Z" level=info msg="RemovePodSandbox \"935765464b8016a4c81f0eb71ea16b47b374a4b20e958b96b41c525799571aa0\" returns successfully" Jan 13 20:46:33.661082 containerd[1494]: time="2025-01-13T20:46:33.661029108Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" Jan 13 20:46:33.661230 containerd[1494]: time="2025-01-13T20:46:33.661197965Z" level=info msg="TearDown network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" successfully" Jan 13 20:46:33.661230 containerd[1494]: time="2025-01-13T20:46:33.661215498Z" level=info msg="StopPodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" returns successfully" Jan 13 20:46:33.661685 containerd[1494]: time="2025-01-13T20:46:33.661648390Z" level=info msg="RemovePodSandbox for \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" Jan 13 20:46:33.661767 containerd[1494]: time="2025-01-13T20:46:33.661693936Z" level=info msg="Forcibly stopping sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\"" Jan 13 20:46:33.661858 containerd[1494]: time="2025-01-13T20:46:33.661808360Z" level=info msg="TearDown network for sandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" successfully" Jan 13 20:46:33.666381 containerd[1494]: time="2025-01-13T20:46:33.666317538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.666381 containerd[1494]: time="2025-01-13T20:46:33.666395464Z" level=info msg="RemovePodSandbox \"6648a75daa73b6327f9e49b4f9188a757daf2a956a1af93888cc712d8a01401c\" returns successfully" Jan 13 20:46:33.666900 containerd[1494]: time="2025-01-13T20:46:33.666853794Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\"" Jan 13 20:46:33.667065 containerd[1494]: time="2025-01-13T20:46:33.666981324Z" level=info msg="TearDown network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" successfully" Jan 13 20:46:33.667065 containerd[1494]: time="2025-01-13T20:46:33.666995791Z" level=info msg="StopPodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" returns successfully" Jan 13 20:46:33.667313 containerd[1494]: time="2025-01-13T20:46:33.667275916Z" level=info msg="RemovePodSandbox for \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\"" Jan 13 20:46:33.667313 containerd[1494]: time="2025-01-13T20:46:33.667303638Z" level=info msg="Forcibly stopping sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\"" Jan 13 20:46:33.667469 containerd[1494]: time="2025-01-13T20:46:33.667395320Z" level=info msg="TearDown network for sandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" successfully" Jan 13 20:46:33.672447 containerd[1494]: time="2025-01-13T20:46:33.672386282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.672447 containerd[1494]: time="2025-01-13T20:46:33.672473716Z" level=info msg="RemovePodSandbox \"c8c7769d0c6dee735c91a3faad49a47fc36e69813630851ead08a428fe2e9bc2\" returns successfully" Jan 13 20:46:33.673046 containerd[1494]: time="2025-01-13T20:46:33.673003009Z" level=info msg="StopPodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\"" Jan 13 20:46:33.673199 containerd[1494]: time="2025-01-13T20:46:33.673143733Z" level=info msg="TearDown network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" successfully" Jan 13 20:46:33.673199 containerd[1494]: time="2025-01-13T20:46:33.673158841Z" level=info msg="StopPodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" returns successfully" Jan 13 20:46:33.673612 containerd[1494]: time="2025-01-13T20:46:33.673569411Z" level=info msg="RemovePodSandbox for \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\"" Jan 13 20:46:33.673612 containerd[1494]: time="2025-01-13T20:46:33.673613424Z" level=info msg="Forcibly stopping sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\"" Jan 13 20:46:33.673889 containerd[1494]: time="2025-01-13T20:46:33.673722178Z" level=info msg="TearDown network for sandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" successfully" Jan 13 20:46:33.678125 containerd[1494]: time="2025-01-13T20:46:33.678068351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.678192 containerd[1494]: time="2025-01-13T20:46:33.678131750Z" level=info msg="RemovePodSandbox \"922a90aa476c646d08954b680f202804767966e9d487560a097dcf41134db412\" returns successfully" Jan 13 20:46:33.678837 containerd[1494]: time="2025-01-13T20:46:33.678778002Z" level=info msg="StopPodSandbox for \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\"" Jan 13 20:46:33.678976 containerd[1494]: time="2025-01-13T20:46:33.678912424Z" level=info msg="TearDown network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\" successfully" Jan 13 20:46:33.678976 containerd[1494]: time="2025-01-13T20:46:33.678930328Z" level=info msg="StopPodSandbox for \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\" returns successfully" Jan 13 20:46:33.679302 containerd[1494]: time="2025-01-13T20:46:33.679259756Z" level=info msg="RemovePodSandbox for \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\"" Jan 13 20:46:33.679302 containerd[1494]: time="2025-01-13T20:46:33.679295112Z" level=info msg="Forcibly stopping sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\"" Jan 13 20:46:33.679494 containerd[1494]: time="2025-01-13T20:46:33.679394278Z" level=info msg="TearDown network for sandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\" successfully" Jan 13 20:46:33.684183 containerd[1494]: time="2025-01-13T20:46:33.684128729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.684256 containerd[1494]: time="2025-01-13T20:46:33.684204761Z" level=info msg="RemovePodSandbox \"acce6d65f7793a6ac9fa2bd9d4389f805db039eab299fc211cc4f6f10e06c69d\" returns successfully" Jan 13 20:46:33.684623 containerd[1494]: time="2025-01-13T20:46:33.684589092Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:33.684797 containerd[1494]: time="2025-01-13T20:46:33.684764211Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:33.684797 containerd[1494]: time="2025-01-13T20:46:33.684788547Z" level=info msg="StopPodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:33.685436 containerd[1494]: time="2025-01-13T20:46:33.685211099Z" level=info msg="RemovePodSandbox for \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:33.685436 containerd[1494]: time="2025-01-13T20:46:33.685283044Z" level=info msg="Forcibly stopping sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\"" Jan 13 20:46:33.685436 containerd[1494]: time="2025-01-13T20:46:33.685403250Z" level=info msg="TearDown network for sandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" successfully" Jan 13 20:46:33.690599 containerd[1494]: time="2025-01-13T20:46:33.690531038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.690599 containerd[1494]: time="2025-01-13T20:46:33.690605378Z" level=info msg="RemovePodSandbox \"8ef7867126011ca5d6761ce9ca5541f1c8551d5f663263ce1d3676f8f59bdd62\" returns successfully" Jan 13 20:46:33.691059 containerd[1494]: time="2025-01-13T20:46:33.691022461Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:33.691169 containerd[1494]: time="2025-01-13T20:46:33.691139700Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:33.691169 containerd[1494]: time="2025-01-13T20:46:33.691160820Z" level=info msg="StopPodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:33.693172 containerd[1494]: time="2025-01-13T20:46:33.691544920Z" level=info msg="RemovePodSandbox for \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:33.693172 containerd[1494]: time="2025-01-13T20:46:33.691594153Z" level=info msg="Forcibly stopping sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\"" Jan 13 20:46:33.693172 containerd[1494]: time="2025-01-13T20:46:33.691713035Z" level=info msg="TearDown network for sandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" successfully" Jan 13 20:46:33.695727 containerd[1494]: time="2025-01-13T20:46:33.695690116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.695850 containerd[1494]: time="2025-01-13T20:46:33.695741282Z" level=info msg="RemovePodSandbox \"5e3ab1a36058f753bfc36603ca408f27413fecb66d8e5780f5ae6135b750a451\" returns successfully" Jan 13 20:46:33.696060 containerd[1494]: time="2025-01-13T20:46:33.696018132Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:33.696173 containerd[1494]: time="2025-01-13T20:46:33.696125713Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:33.696173 containerd[1494]: time="2025-01-13T20:46:33.696146222Z" level=info msg="StopPodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:33.696717 containerd[1494]: time="2025-01-13T20:46:33.696677137Z" level=info msg="RemovePodSandbox for \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:33.696717 containerd[1494]: time="2025-01-13T20:46:33.696710740Z" level=info msg="Forcibly stopping sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\"" Jan 13 20:46:33.696871 containerd[1494]: time="2025-01-13T20:46:33.696793667Z" level=info msg="TearDown network for sandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" successfully" Jan 13 20:46:33.700627 containerd[1494]: time="2025-01-13T20:46:33.700575761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.700627 containerd[1494]: time="2025-01-13T20:46:33.700625705Z" level=info msg="RemovePodSandbox \"688f76848039fded11cbfc86009ec1143bcad502698fcb678c468205b2ebc193\" returns successfully" Jan 13 20:46:33.701066 containerd[1494]: time="2025-01-13T20:46:33.701017700Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:33.701179 containerd[1494]: time="2025-01-13T20:46:33.701154957Z" level=info msg="TearDown network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" successfully" Jan 13 20:46:33.701179 containerd[1494]: time="2025-01-13T20:46:33.701170386Z" level=info msg="StopPodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" returns successfully" Jan 13 20:46:33.701527 containerd[1494]: time="2025-01-13T20:46:33.701494734Z" level=info msg="RemovePodSandbox for \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:33.701576 containerd[1494]: time="2025-01-13T20:46:33.701535831Z" level=info msg="Forcibly stopping sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\"" Jan 13 20:46:33.701692 containerd[1494]: time="2025-01-13T20:46:33.701641400Z" level=info msg="TearDown network for sandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" successfully" Jan 13 20:46:33.706263 containerd[1494]: time="2025-01-13T20:46:33.706224256Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.706347 containerd[1494]: time="2025-01-13T20:46:33.706281944Z" level=info msg="RemovePodSandbox \"460dc422a3382e54b5e0b549f690c6a7c197011efab51366ec65f0077ab0caa1\" returns successfully" Jan 13 20:46:33.706715 containerd[1494]: time="2025-01-13T20:46:33.706671596Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" Jan 13 20:46:33.706798 containerd[1494]: time="2025-01-13T20:46:33.706779248Z" level=info msg="TearDown network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" successfully" Jan 13 20:46:33.706827 containerd[1494]: time="2025-01-13T20:46:33.706792162Z" level=info msg="StopPodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" returns successfully" Jan 13 20:46:33.707106 containerd[1494]: time="2025-01-13T20:46:33.707079521Z" level=info msg="RemovePodSandbox for \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" Jan 13 20:46:33.707106 containerd[1494]: time="2025-01-13T20:46:33.707099749Z" level=info msg="Forcibly stopping sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\"" Jan 13 20:46:33.707274 containerd[1494]: time="2025-01-13T20:46:33.707174348Z" level=info msg="TearDown network for sandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" successfully" Jan 13 20:46:33.710938 containerd[1494]: time="2025-01-13T20:46:33.710905177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.711003 containerd[1494]: time="2025-01-13T20:46:33.710952115Z" level=info msg="RemovePodSandbox \"9973d3bfd8bfefa8a4d71cecffc32db4434f758d14539321ffca358c47a0f538\" returns successfully" Jan 13 20:46:33.711254 containerd[1494]: time="2025-01-13T20:46:33.711227482Z" level=info msg="StopPodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\"" Jan 13 20:46:33.711338 containerd[1494]: time="2025-01-13T20:46:33.711320647Z" level=info msg="TearDown network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" successfully" Jan 13 20:46:33.711338 containerd[1494]: time="2025-01-13T20:46:33.711331377Z" level=info msg="StopPodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" returns successfully" Jan 13 20:46:33.711616 containerd[1494]: time="2025-01-13T20:46:33.711592156Z" level=info msg="RemovePodSandbox for \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\"" Jan 13 20:46:33.711676 containerd[1494]: time="2025-01-13T20:46:33.711626300Z" level=info msg="Forcibly stopping sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\"" Jan 13 20:46:33.711731 containerd[1494]: time="2025-01-13T20:46:33.711701541Z" level=info msg="TearDown network for sandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" successfully" Jan 13 20:46:33.715357 containerd[1494]: time="2025-01-13T20:46:33.715318325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.715357 containerd[1494]: time="2025-01-13T20:46:33.715374481Z" level=info msg="RemovePodSandbox \"e99b0113f5018033774685f937f96627405a03ddeb31c6d5035ca78fbd2e3152\" returns successfully" Jan 13 20:46:33.715721 containerd[1494]: time="2025-01-13T20:46:33.715691605Z" level=info msg="StopPodSandbox for \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\"" Jan 13 20:46:33.715850 containerd[1494]: time="2025-01-13T20:46:33.715817411Z" level=info msg="TearDown network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\" successfully" Jan 13 20:46:33.715850 containerd[1494]: time="2025-01-13T20:46:33.715839312Z" level=info msg="StopPodSandbox for \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\" returns successfully" Jan 13 20:46:33.716172 containerd[1494]: time="2025-01-13T20:46:33.716139105Z" level=info msg="RemovePodSandbox for \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\"" Jan 13 20:46:33.716172 containerd[1494]: time="2025-01-13T20:46:33.716164733Z" level=info msg="Forcibly stopping sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\"" Jan 13 20:46:33.716282 containerd[1494]: time="2025-01-13T20:46:33.716243631Z" level=info msg="TearDown network for sandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\" successfully" Jan 13 20:46:33.720350 containerd[1494]: time="2025-01-13T20:46:33.720297395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.720350 containerd[1494]: time="2025-01-13T20:46:33.720340746Z" level=info msg="RemovePodSandbox \"f6f6df0120ddf91f24a486463e83b98feddacc9d5a718bad8ee481e8dc2d2de7\" returns successfully" Jan 13 20:46:33.720682 containerd[1494]: time="2025-01-13T20:46:33.720651118Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:33.720877 containerd[1494]: time="2025-01-13T20:46:33.720848098Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:33.720877 containerd[1494]: time="2025-01-13T20:46:33.720864458Z" level=info msg="StopPodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:33.721171 containerd[1494]: time="2025-01-13T20:46:33.721138112Z" level=info msg="RemovePodSandbox for \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:33.721171 containerd[1494]: time="2025-01-13T20:46:33.721162468Z" level=info msg="Forcibly stopping sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\"" Jan 13 20:46:33.721274 containerd[1494]: time="2025-01-13T20:46:33.721233441Z" level=info msg="TearDown network for sandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" successfully" Jan 13 20:46:33.725216 containerd[1494]: time="2025-01-13T20:46:33.725183290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.725271 containerd[1494]: time="2025-01-13T20:46:33.725235708Z" level=info msg="RemovePodSandbox \"d6cb46caedba65d9b99d47f08f99c4aba2850de8ebde2cfcdf10f4aae53df22c\" returns successfully" Jan 13 20:46:33.725580 containerd[1494]: time="2025-01-13T20:46:33.725559025Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:33.725671 containerd[1494]: time="2025-01-13T20:46:33.725651628Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:33.725671 containerd[1494]: time="2025-01-13T20:46:33.725667017Z" level=info msg="StopPodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:33.725886 containerd[1494]: time="2025-01-13T20:46:33.725865430Z" level=info msg="RemovePodSandbox for \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:33.725924 containerd[1494]: time="2025-01-13T20:46:33.725886910Z" level=info msg="Forcibly stopping sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\"" Jan 13 20:46:33.725989 containerd[1494]: time="2025-01-13T20:46:33.725953626Z" level=info msg="TearDown network for sandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" successfully" Jan 13 20:46:33.730031 containerd[1494]: time="2025-01-13T20:46:33.729990167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.730089 containerd[1494]: time="2025-01-13T20:46:33.730050240Z" level=info msg="RemovePodSandbox \"87259777d608ae9b1dd666c53ff12f857c21d2d07b940b206a78b979692bd0c1\" returns successfully" Jan 13 20:46:33.730344 containerd[1494]: time="2025-01-13T20:46:33.730307181Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:33.730440 containerd[1494]: time="2025-01-13T20:46:33.730420574Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:33.730440 containerd[1494]: time="2025-01-13T20:46:33.730437886Z" level=info msg="StopPodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:33.730743 containerd[1494]: time="2025-01-13T20:46:33.730699197Z" level=info msg="RemovePodSandbox for \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:33.730743 containerd[1494]: time="2025-01-13T20:46:33.730721189Z" level=info msg="Forcibly stopping sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\"" Jan 13 20:46:33.730812 containerd[1494]: time="2025-01-13T20:46:33.730785179Z" level=info msg="TearDown network for sandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" successfully" Jan 13 20:46:33.734473 containerd[1494]: time="2025-01-13T20:46:33.734414998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.734473 containerd[1494]: time="2025-01-13T20:46:33.734475020Z" level=info msg="RemovePodSandbox \"46b0e418bd457c351804c73d9feeb34a70e96b4aaedaa0ddcc7151ce63364a11\" returns successfully" Jan 13 20:46:33.734739 containerd[1494]: time="2025-01-13T20:46:33.734706705Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:33.734828 containerd[1494]: time="2025-01-13T20:46:33.734806011Z" level=info msg="TearDown network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" successfully" Jan 13 20:46:33.734828 containerd[1494]: time="2025-01-13T20:46:33.734826008Z" level=info msg="StopPodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" returns successfully" Jan 13 20:46:33.735041 containerd[1494]: time="2025-01-13T20:46:33.735017137Z" level=info msg="RemovePodSandbox for \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:33.735076 containerd[1494]: time="2025-01-13T20:46:33.735041262Z" level=info msg="Forcibly stopping sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\"" Jan 13 20:46:33.735147 containerd[1494]: time="2025-01-13T20:46:33.735111904Z" level=info msg="TearDown network for sandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" successfully" Jan 13 20:46:33.738881 containerd[1494]: time="2025-01-13T20:46:33.738843073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.738881 containerd[1494]: time="2025-01-13T20:46:33.738882096Z" level=info msg="RemovePodSandbox \"97aea214f67d221bf28d675095bd99c4759b410ce577c1970bc03d8a1bf349b8\" returns successfully" Jan 13 20:46:33.739235 containerd[1494]: time="2025-01-13T20:46:33.739201106Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" Jan 13 20:46:33.739377 containerd[1494]: time="2025-01-13T20:46:33.739338263Z" level=info msg="TearDown network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" successfully" Jan 13 20:46:33.739408 containerd[1494]: time="2025-01-13T20:46:33.739362438Z" level=info msg="StopPodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" returns successfully" Jan 13 20:46:33.739674 containerd[1494]: time="2025-01-13T20:46:33.739649977Z" level=info msg="RemovePodSandbox for \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" Jan 13 20:46:33.739737 containerd[1494]: time="2025-01-13T20:46:33.739681196Z" level=info msg="Forcibly stopping sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\"" Jan 13 20:46:33.739840 containerd[1494]: time="2025-01-13T20:46:33.739820296Z" level=info msg="TearDown network for sandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" successfully" Jan 13 20:46:33.744032 containerd[1494]: time="2025-01-13T20:46:33.743996882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.744084 containerd[1494]: time="2025-01-13T20:46:33.744040073Z" level=info msg="RemovePodSandbox \"cf571810a608a2554c9b3f7bbb7cac049b45d93a96d1e40bda4b2dab712545d8\" returns successfully" Jan 13 20:46:33.744333 containerd[1494]: time="2025-01-13T20:46:33.744275364Z" level=info msg="StopPodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\"" Jan 13 20:46:33.744472 containerd[1494]: time="2025-01-13T20:46:33.744424624Z" level=info msg="TearDown network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" successfully" Jan 13 20:46:33.744501 containerd[1494]: time="2025-01-13T20:46:33.744447496Z" level=info msg="StopPodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" returns successfully" Jan 13 20:46:33.744729 containerd[1494]: time="2025-01-13T20:46:33.744693518Z" level=info msg="RemovePodSandbox for \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\"" Jan 13 20:46:33.744729 containerd[1494]: time="2025-01-13T20:46:33.744720198Z" level=info msg="Forcibly stopping sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\"" Jan 13 20:46:33.744847 containerd[1494]: time="2025-01-13T20:46:33.744793024Z" level=info msg="TearDown network for sandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" successfully" Jan 13 20:46:33.748490 containerd[1494]: time="2025-01-13T20:46:33.748430778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.748591 containerd[1494]: time="2025-01-13T20:46:33.748501200Z" level=info msg="RemovePodSandbox \"8b12b4dba99738ce1ce48324477b314ffa0c1d12cb8c51296dce493685a8b6f5\" returns successfully" Jan 13 20:46:33.748833 containerd[1494]: time="2025-01-13T20:46:33.748803197Z" level=info msg="StopPodSandbox for \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\"" Jan 13 20:46:33.749108 containerd[1494]: time="2025-01-13T20:46:33.749073644Z" level=info msg="TearDown network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\" successfully" Jan 13 20:46:33.749108 containerd[1494]: time="2025-01-13T20:46:33.749091187Z" level=info msg="StopPodSandbox for \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\" returns successfully" Jan 13 20:46:33.749520 containerd[1494]: time="2025-01-13T20:46:33.749489644Z" level=info msg="RemovePodSandbox for \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\"" Jan 13 20:46:33.749572 containerd[1494]: time="2025-01-13T20:46:33.749521294Z" level=info msg="Forcibly stopping sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\"" Jan 13 20:46:33.749716 containerd[1494]: time="2025-01-13T20:46:33.749656918Z" level=info msg="TearDown network for sandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\" successfully" Jan 13 20:46:33.754349 containerd[1494]: time="2025-01-13T20:46:33.754285451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.754421 containerd[1494]: time="2025-01-13T20:46:33.754352326Z" level=info msg="RemovePodSandbox \"2dd288bef25209489980d46fc31a231837f8d4df850d1308a4b1a897d16c4ae6\" returns successfully" Jan 13 20:46:38.454620 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:51382.service - OpenSSH per-connection server daemon (10.0.0.1:51382). Jan 13 20:46:38.497503 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 51382 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:38.499445 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:38.504375 systemd-logind[1485]: New session 20 of user core. Jan 13 20:46:38.514705 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:46:38.634444 sshd[6407]: Connection closed by 10.0.0.1 port 51382 Jan 13 20:46:38.634873 sshd-session[6405]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:38.639973 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:51382.service: Deactivated successfully. Jan 13 20:46:38.642341 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:46:38.643568 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:46:38.645004 systemd-logind[1485]: Removed session 20. Jan 13 20:46:38.870293 systemd[1]: run-containerd-runc-k8s.io-63c28e5cf5cfb3539d6f43786591e813d04ccfba570dba57a05d4900d6b8cd0e-runc.BPfhpN.mount: Deactivated successfully. Jan 13 20:46:43.647035 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:55386.service - OpenSSH per-connection server daemon (10.0.0.1:55386). Jan 13 20:46:43.688340 sshd[6440]: Accepted publickey for core from 10.0.0.1 port 55386 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:43.690120 sshd-session[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:43.694640 systemd-logind[1485]: New session 21 of user core. 
Jan 13 20:46:43.701647 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:46:43.822867 sshd[6442]: Connection closed by 10.0.0.1 port 55386 Jan 13 20:46:43.823266 sshd-session[6440]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:43.827910 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:55386.service: Deactivated successfully. Jan 13 20:46:43.830131 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:46:43.830796 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:46:43.831860 systemd-logind[1485]: Removed session 21. Jan 13 20:46:46.352244 kubelet[2681]: E0113 20:46:46.352201 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:48.229917 kubelet[2681]: E0113 20:46:48.229852 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:48.838583 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:55398.service - OpenSSH per-connection server daemon (10.0.0.1:55398). Jan 13 20:46:48.893507 sshd[6480]: Accepted publickey for core from 10.0.0.1 port 55398 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:48.895830 sshd-session[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:48.900421 systemd-logind[1485]: New session 22 of user core. Jan 13 20:46:48.910596 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:46:49.028165 sshd[6482]: Connection closed by 10.0.0.1 port 55398 Jan 13 20:46:49.028556 sshd-session[6480]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:49.033093 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:55398.service: Deactivated successfully. 
Jan 13 20:46:49.035724 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:46:49.036508 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:46:49.037481 systemd-logind[1485]: Removed session 22. Jan 13 20:46:53.230577 kubelet[2681]: E0113 20:46:53.230520 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:53.296259 kubelet[2681]: I0113 20:46:53.296217 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:54.042978 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:60000.service - OpenSSH per-connection server daemon (10.0.0.1:60000). Jan 13 20:46:54.090986 sshd[6497]: Accepted publickey for core from 10.0.0.1 port 60000 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:46:54.092981 sshd-session[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:54.098302 systemd-logind[1485]: New session 23 of user core. Jan 13 20:46:54.104791 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:46:54.223430 sshd[6499]: Connection closed by 10.0.0.1 port 60000 Jan 13 20:46:54.223911 sshd-session[6497]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:54.227706 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:60000.service: Deactivated successfully. Jan 13 20:46:54.230000 kubelet[2681]: E0113 20:46:54.229964 2681 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:54.231911 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:46:54.232718 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:46:54.233785 systemd-logind[1485]: Removed session 23.